Why data should be FAIR
FAIR is a fairly recent concept that stands for ‘Findable, Accessible, Interoperable and Reusable’. On the face of it, these principles don’t seem so remarkable. But what sets FAIR apart from earlier open-data models is that the emphasis has shifted from the human researcher to machines.
Data should be FAIR not just to humans but also to machines. After all, as humans we can only absorb so much data. Machines aren’t limited in that way: their processing power and storage can be scaled as needed. Consequently, we can reach a point where the sheer amount of data (the volume and variety parts of Big Data’s four Vs) no longer limits us. This, in turn, holds huge promise for more innovation and better insights.
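What “FAIR to machines” looks like in practice is machine-actionable metadata. The sketch below is a minimal, illustrative example using the schema.org `Dataset` vocabulary serialized as JSON-LD; all names, URLs and identifiers in it are hypothetical placeholders, not real resources.

```python
import json

# Hypothetical machine-readable dataset description (JSON-LD, schema.org).
# Each field maps loosely onto one of the FAIR facets.
metadata = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    # Findable: a descriptive name and a persistent identifier (placeholder DOI)
    "name": "Example clinical measurements",
    "identifier": "https://doi.org/10.xxxx/example",
    # Accessible: a resolvable location for the data (placeholder URL)
    "url": "https://example.org/data/measurements",
    # Interoperable: an open, standard format
    "encodingFormat": "text/csv",
    # Reusable: an explicit license
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "description": "Illustrative record showing how FAIR facets map to metadata fields.",
}

# Machines can parse this without human interpretation.
print(json.dumps(metadata, indent=2))
```

A crawler or repository harvester can index such a record automatically, which is precisely the shift from human-oriented to machine-oriented data sharing that the FAIR principles describe.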
Imagine any type of data being ‘Findable, Accessible, Interoperable and Reusable’ by both humans and machines. The possibilities to discover new insights in hitherto unknown and unexpected places multiply manifold. We may finally be able to make far greater strides than ever before in challenging healthcare areas such as cancer treatment and rare diseases. And FAIR data holds the same promise for virtually every other industry.
But before we get there, we all have to become better data custodians and data stewards. Data isn’t FAIR simply because you open it up to others. We should all become much more disciplined in opening data the correct way, i.e. the FAIR way. It’s great to see that more and more government-funded institutions, as well as other organizations, are opening up their data. If we all continue to do so, the chances of doing good through data will multiply and accelerate.
Join the FAIR hackathon
And let’s all continue to spread the FAIR word!
Wilkinson, M. D. et al. The FAIR Guiding Principles for scientific data management and stewardship. Sci. Data 3, 160018 (2016). doi:10.1038/sdata.2016.18. https://www.nature.com/articles/sdata201618