14:00

Prem Devanbu's seminar

At UC Davis, we discovered (a decade ago) that, despite the considerable power and flexibility of programming languages, large software corpora are actually even more repetitive than natural-language corpora. We went on to show that this “naturalness” of code could be captured in statistical models and exploited within software tools. This line of work enjoyed a tremendous boost from the high capacity and flexibility of deep learning models, and numerous other creative and interesting applications of naturalness have ensued from colleagues around the world. More recently, we have focused on another property of software: it is bimodal. Software is written not only to be run on machines, but also to be read by humans; this makes it amenable to both formal analysis and statistical prediction. Bimodality allows new ways of training machine learning models, and new ways to understand the human experience of programming. We argue (with some examples) that bimodality has been useful in a “shelf-stable” way across multiple generations of model scales, paradigms, and training approaches.
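One way to make the repetitiveness claim concrete: under the same simple language model, code tends to require fewer bits per token than natural-language text. The Python sketch below is only a toy illustration of that kind of measurement, not the models or corpora from the studies discussed in the talk; the bigram model, the add-one smoothing, and the two tiny samples are assumptions chosen for brevity (the actual experiments train on large corpora and score held-out files).

import math
import re
from collections import Counter

def tokenize(text):
    # Crude lexer: identifiers/keywords, numbers, and single punctuation marks.
    return re.findall(r"[A-Za-z_]\w*|\d+|\S", text)

def cross_entropy(tokens):
    # Average bits per token under an add-one-smoothed bigram model,
    # trained and scored on the same stream (illustration only).
    vocab_size = len(set(tokens))
    bigrams = Counter(zip(tokens, tokens[1:]))
    unigrams = Counter(tokens[:-1])
    bits = 0.0
    for prev, cur in zip(tokens, tokens[1:]):
        p = (bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab_size)
        bits += -math.log2(p)
    return bits / (len(tokens) - 1)

# Tiny, made-up samples; lower bits/token on the code sample reflects
# the repetitiveness ("naturalness") that statistical models can exploit.
code_sample = """
for i in range(10):
    total = total + i
for j in range(10):
    count = count + j
"""

prose_sample = """
The meeting covered several unrelated topics, and each speaker
introduced new vocabulary that the previous speakers had not used.
"""

print("code  :", round(cross_entropy(tokenize(code_sample)), 2), "bits/token")
print("prose :", round(cross_entropy(tokenize(prose_sample)), 2), "bits/token")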

English
Amphi LaBRI