## Sunday, 29 January 2012

### Flattr: should we use it in academia?

I have just found the service I have been looking for for a while: Flattr. It's a social micropayment platform, an evolution of the "Buy me a coffee" buttons that started appearing on blogs a few years ago.
It allows you to donate a small amount of money each month, distributed among the authors of content that you "liked" that month.

Here's how it works: you sign up on Flattr, top up your account with a small amount, let's say $5 a month, then you come back to my blog and press the Flattr button on the right. At the end of the month, your $5 will be distributed equally among the authors of things you Flattr'ed, so I get my share of your $5. This way, the small contributions add up, and the community can reward the authors of content they frequently read and who inspire them. I personally think that, details and implementation aside, this general idea is brilliant. I'm definitely going to top up my account and suggest that the people whose stuff I read add a Flattr button.
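
For concreteness, here is a minimal sketch of the split as I understand it; the equal-split rule is the point, the function and names are just illustrative, not Flattr's actual API:

```python
# Toy sketch of Flattr's equal-split rule (illustrative names, not Flattr's API).
def monthly_split(budget, flattred):
    """Divide a monthly budget equally among everything you clicked."""
    share = budget / len(flattred)
    return {author: share for author in flattred}

print(monthly_split(5.0, ["blog A", "blog B", "podcast C", "this blog"]))
# {'blog A': 1.25, 'blog B': 1.25, 'podcast C': 1.25, 'this blog': 1.25}
```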

I actually first thought of such a social redistribution system in terms of academic funding, and almost put a "Buy me a coffee" button on my publications page to make a point. Now, having found Flattr, I have started thinking again about how such a system could be used for a more equal redistribution of funding in academia. Let's review some current inefficiencies in academic research funding:

Inflexible fellowship values: Money in academia is often awarded in discrete chunks, especially for junior positions. You apply for various grants, fellowships and scholarships, and you either get the whole thing for x years or you get nothing. If, for example, you got a Gates Scholarship to study for a PhD in Cambridge, you got it for three years: the amount is practically fixed for those years (other than tracking inflation), you can't cease to be a Gates scholar if you do poorly, and you can't become one if you perform better. Your funding is pretty much fixed and independent of your performance for three to four years. This lack of adaptation generates random inequalities: some fellowships are only awarded every second year, for example; and in certain countries it's close to impossible to get a decent academic salary, independently of the quality of the research being done.

Poor metrics: In my opinion, academia struggles with a big unsolved problem: how to attribute authority, and how to measure and reward someone's contribution, inspiration and impact? The most widespread metrics are citation counts, the h-index, impact factors, eigenfactors, etc. These are crude, heuristic measures of influence and authority. They certainly make some intuitive sense and correlate vaguely with what we really want to measure, but they are not a solution, just complicated features. If we want a fairer system, we have to establish new standards for measuring influence, importance and so on. (Well, first of all we have to agree on what we want to measure.)
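
For readers who haven't met these metrics: the h-index, for instance, is the largest h such that the author has h papers with at least h citations each. A quick sketch (example numbers made up):

```python
def h_index(citations):
    """Largest h such that the author has h papers with at least h citations each."""
    cites = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations each
```

Note how much this throws away: a single 1000-citation paper and ten 1-citation papers both give an h-index of 1 and 10 respectively, regardless of what the citations actually meant.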

Discrete random variables: When measuring output, there are way too many discrete variables involved: Is your paper accepted, yes or no? Does this paper cite your relevant papers appropriately, yes or no? Which journal did your work appear in? These binary decisions flatten the opinion formed about your paper: if your paper is accepted, it means some undisclosed set of reviewers looked at it, didn't find a mistake and liked it. But how much did they like it? Were they absolutely amazed by it, or did they just not care enough to reject it? Were the reviewers respected academics, or were they incompetent? What does the rest of the community think? If I want to state that, in my opinion, a particular piece of research represents a huge step forward in machine learning, pretty much my only option for measurably rewarding the authors is to cite their paper; but maybe my work is on something completely different, so a citation would be inappropriate.

All in all, the measurement of scientific output is inherently inefficient because of these binary/discrete decisions involved. Everybody with an engineering background knows that discretisation means loss of information: the measurement of academic output is discretised rather arbitrarily, with little consideration given to effective information transfer. And this loss of bandwidth constrains the system of academic rewards to be either slow and accurate (you have to grow old to get recognition) or fast but inaccurate (you will be judged on the basis of a few noisy random variables). Often, I think, it's both slow and inaccurate :)
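
As a toy illustration of the bandwidth point (all numbers made up): suppose a paper's quality takes one of ten equally likely levels, but the community only ever observes an accept/reject bit at some threshold. The observable signal then carries far less information than the underlying opinion:

```python
import numpy as np

rng = np.random.default_rng(0)
quality = rng.integers(0, 10, size=100_000)  # latent 10-level quality score

def entropy(samples):
    """Empirical entropy in bits of a sample of discrete values."""
    _, counts = np.unique(samples, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

print(f"10-level score: {entropy(quality):.2f} bits")       # ~3.32 bits
print(f"accept/reject:  {entropy(quality >= 7):.2f} bits")  # ~0.88 bits
```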

Does Flattr suggest a solution? Could something like Flattr help improve academic funding and credit assignment? Imagine you get a fellowship, but have to re-allocate 5% of your earnings each month to fellow scientists, according to how important their contributions are to your field. Could this redistribution be done in such a way that equilibrium salaries are commensurate with the value of each scientist? Could we create an incentive-compatible academia, where it is in everyone's best interest to honestly report whose research they find valuable? Let me know what you think in the comments.
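
To make the equilibrium question concrete, here is a toy simulation of the scheme I have in mind (the 5% rate is from above; the number of scientists, fellowships and weight matrix are entirely hypothetical). Each scientist starts from a base fellowship, gives away 5% of their current earnings each month, and splits that among peers according to how much they value their work; iterating this converges to equilibrium earnings.

```python
import numpy as np

# Toy simulation of the proposed 5% redistribution (all numbers hypothetical).
# Four scientists, each starting from the same base fellowship.
base = np.array([100.0, 100.0, 100.0, 100.0])

# W[i, j] = fraction of scientist i's redistributed 5% that goes to scientist j.
# Each row sums to 1, and the diagonal is zero: you can't reward yourself.
W = np.array([
    [0.0, 0.7, 0.2, 0.1],
    [0.5, 0.0, 0.3, 0.2],
    [0.4, 0.4, 0.0, 0.2],
    [0.1, 0.8, 0.1, 0.0],
])

rate = 0.05           # fraction of monthly earnings that must be re-allocated
s = base.copy()       # current earnings
for _ in range(100):  # iterate the monthly redistribution to a fixed point
    s = base - rate * s + rate * W.T @ s

print(np.round(s, 2))  # equilibrium earnings; scientist 1 is most valued here
```

Since each row of W sums to 1, total money is conserved; only its distribution shifts towards the scientists the community values, which is exactly the equilibrium property asked about above.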

#### 3 comments:

1. It is hard to imagine this being the primary component of funding in academia, as it might be difficult for a new scientist to get off the ground in such an environment. However, this seems like a great supplemental source of funding for not only scientists, but writers, programmers, and others who can make their creative output available on the web.

I've long dreamed of a day when it would be possible to allocate my payments for software, books, and articles based on some measure of the "enjoyment" I got out of them. Considering the state of neuroscience, however, I think something like Flattr is about as close as we can expect to get in the next few decades.

2. And for another interesting idea about science funding, check out idea #1 of this blog post: http://djstrouse.com/four-big-ideas-from-the-open-science-summit-2010/

1. Interesting idea.

I think - and I'm sure many have pointed this out - the problem with ordinary taxpayers directly microfinancing academia is that those people will not necessarily know which projects are worth financing. It would probably introduce a bias towards projects that are easy to explain, that is, away from basic research and complicated-sounding work whose success in applications is as yet unpredictable. How do you, an average taxpayer, allocate between two projects: "I'm going to build a new kind of solar panel tomorrow" vs. "I'm going to research complicated materials which one day may (or may not) revolutionise solar panels"? Or "I'm going to work on scene segmentation" vs. "I'm going to develop an open source version of Kinect for the Xbox"? These decisions are simply too hard for the taxpayer. With proper control this may work, but I guess that's the role that "government" and peer review are meant to play.

It's hard even for peer scientists to establish the relative value of research, but I think they are still in a better position than ordinary citizens. Peer review has its merits; it's just sitting in the middle of a poor and inefficient implementation.