So on April 1st Anton and I posted to the group's blog about a fascinating project that we'd been working on in the preceding month. We had been using "advanced machine learning techniques" to conduct sentiment analysis on scientific articles to see if they contain irrefutable evidence for various breakthroughs such as a working quantum computer or (from our own field) Majorana zero modes. Try it for yourself below!
To the untrained eye it even looks pretty plausible: there's a flashy animation while the "predicting" happens, and it even shows you the name of the article that you asked about. Of course, digging even a little into the source code reveals that we're using a somewhat simplistic model (no chance that we're overfitting here!); nevertheless, we manage to get 100% accurate results!
What surprised me the most was that Anton was managing to sustain conversations with colleagues about this "project" and people seemed to be treating it seriously! Of course it's entirely possible that they were just playing along, and that we were, in fact, the ones who were being trolled.
In any case it was a fun little project for me, as my main goal was to test out the Elm language for building frontends for webapps. We'd switched to the React framework for the rewrite of our Zesje grading software, but I was eager to see what pure functional programming could bring to the game with respect to managing state. Although in principle I find the idea of modelling a webapp as a well-defined state machine appealing, in practice I found that for such a small project the hoop-jumping was more hassle than it was worth.
I was also interested in seeing how easy it would be to write a small web API in Haskell.
The excellent Haskell From First Principles uses the Scotty web framework in several examples, so I thought
I'd give that a go. Again, I think that the limited scope of the project masked any gains that pure functional
programming could have provided. Even if pure functional programming gives an asymptotic advantage (in terms of development time and
confidence in a codebase), the relatively large prefactor associated with getting anything done is really significant.
For example, I had to research and import 3 separate network-related libraries to be able to serve the API (Scotty), return HTTP 400 responses (Network.HTTP.Types), and send web requests to other APIs (Network.Wreq), in addition to another library (Lens) for accessing attributes of the responses from the Wreq library (really). Sometimes it feels like Haskell makes the simple things much more complicated than they need to be (even if it does make some complicated things easier).
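To give a flavour of the plumbing involved, here is a minimal sketch of how those four libraries end up meeting in a single handler. To be clear: the route, port, and upstream URL below are invented for illustration, and this uses Scotty's older `param` interface rather than anything from our actual code.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Web.Scotty                          -- the web framework itself
import Network.HTTP.Types (badRequest400)  -- just for the 400 status code
import qualified Network.Wreq as Wreq      -- outgoing HTTP requests
import Control.Lens ((^.))                 -- needed to read Wreq's responses
import Control.Monad.IO.Class (liftIO)
import qualified Data.Text.Lazy as TL

main :: IO ()
main = scotty 3000 $
  get "/sentiment" $ do
    -- read a query parameter; an empty title earns an HTTP 400
    title <- param "title"
    if TL.null title
      then status badRequest400
      else do
        -- forward the query to some upstream API (hypothetical URL)
        r <- liftIO $ Wreq.get ("http://example.com/articles?q=" ++ TL.unpack title)
        -- a lens is the sanctioned way to extract the response body
        raw (r ^. Wreq.responseBody)
```

Nothing here is difficult in isolation, but each of the four imports comes from a different package with its own idioms to learn, which is exactly the prefactor I'm complaining about.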