Let me begin with four recent stories:

Story 1
The election of Donald Trump was quite surprising: how could such a controversial figure reach the White House? The reasons, of course, are innumerable. But what if one of them was Facebook? After all, Trump supporters never stopped using this platform to spread disputed content. What if voters were brainwashed by “fake news” Facebook helped to diffuse? What if this extensive interlinking contributed to Trump’s advertisement and fundraising? However harsh this claim might be, it seriously harms the image of a Web application that would rather help to “connect people” than build border walls. It seems, then, that monitoring needs to be increased, even though this may contradict some assumptions Mark Zuckerberg elevates to precepts. The main target is the “News Feed,” the central column of the application that displays stories posted by Facebook users. What about slightly modifying how News Feed automatically selects new stories so that it ignores “low-quality posts”? This may help restore Facebook’s image, at least a little. After several months of research and testing, a new algorithm is now operational that – based on frequencies of posts and URLs of links – identifies spam users and automatically deprioritizes the links they share. According to Facebook’s vice-president, this new method of computation should significantly reduce the diffusion of “low quality content such as clickbait, sensationalism, and misinformation.”
Story 2
Mars is a distant location. But hundreds of millions of kilometers did not dishearten NASA from sending the robotic rover Curiosity to explore its surface. On August 6, 2012, the costly vehicle safely lands on Gale Crater. Quite a feat! Amazing high-resolution pictures are soon available on NASA’s website, showing the world the jagged surface of this cold and arid planet. Of course, Curiosity is far more than a remote-controlled car taking exotic pictures. It is a genuine laboratory on wheels with many high-tech instruments: two cameras for true-color and multispectral imaging, two pairs of black-and-white cameras for navigation, a robotic arm with an ultra-high-definition camera, a laser-induced spectrometer, a radioisotope thermoelectric generator, two lithium-ion batteries, and so on. Yet there is an obvious cost to this amazing remote-controlled laboratory: it needs to move its dry weight of 900 kilograms. The sharp, rocky surface of Mars does not alleviate the constant efforts of Curiosity’s wheels, which irremediably wear down. Already in January 2014, the situation becomes alarming: is there a way to extend the lifetime of Curiosity’s wheels? After several months of research and testing, a new driving algorithm becomes operational that uses real-time data from the navigation cameras to adjust Curiosity’s speed when it comes to sharp Martian pebbles. By reducing the load on Curiosity’s leading and middle wheels by up to 20% and 11%, respectively, this new method of computation for navigation may be a serious boost for the mission.
Story 3
Israeli secret services in the West Bank are used to dismantling organizations they define as “terrorist” by means of “preventive” actions and intimidation. But what about individuals who commit attacks on a whim? Just like several police departments in the United States, Israeli secret services are now supported by a type of security software whose algorithm generates profiles of potential attackers based on aggregated data posted on social media. Yet while several American civil courts are now seriously considering the potential bias of these new methods of computation, Israeli military justice as applied to suspected Palestinian “attackers” denies them any sort of legal protection. Thanks to the West Bank military commander’s power to stamp administrative detention orders, these “dangerous profiles” can be sentenced to renewable six-month incarcerations without any possibility of appeal. Many Palestinians targeted by this state-secret technology “have served long years without ever seeing a court.”
Story 4
How can people be made to eat more Nutella? These last years have not been easy for the Italian brand of chocolate spread. When palm oil production threatened orangutan habitats, only a small fraction of citizens was eager to criticize its use in Nutella’s recipe. But as soon as palm oil becomes suspected of speeding up the spread of cancer among European Nutella consumers, sales start to drop worryingly. For Nutella, something needs to be done to reconnect with the stomachs of its customers. What about a fresh new marketing campaign? In collaboration with advertising agency Ogilvy & Mather Italia, seven million uniquely designed Nutella jars are soon produced and sold in record time. At the heart of this successful marketing move is an algorithm that computes a carefully selected set of colors and figures in order to generate unique pop patterns.

What a mess! States of affairs, apparently, change. The News Feeds of Facebook users were first subjected to spammers diffusing hoaxes and “fake news” that are presumed to have played a role in the election of Donald Trump. These News Feeds soon became, temporarily, monitored lists of stories worth being read. Similarly, Curiosity’s weight together with sharp Martian pebbles at first seriously affected the robot’s wheels, thus compromising the planned duration of the mission. Yet a few changes in the locomotion system soon started to slow down this unexpected wear. In another case, Israeli secret services were at first powerless against attacks that were not prepared within cell organizations they could dismantle. Yet these services soon became able to identify suspects and put them in jail without any kind of civil procedure. Finally, Nutella was at first an old-fashioned chocolate spread whose recipe included cancer-suspected palm oil. It then became, temporarily, a personalized pop product. For better or worse, collective configurations are rearranged, thus constituting surprising new states of affairs; relationships between humans and non-humans are reconstituted, thus temporarily establishing new networks. This is how the collective world – our world – takes shape: it is constantly reshaped, in many ways.

That being said, we may wish to comprehend some of the dynamics of these messy rearrangements. After all, since we all have to coexist on the same planet, getting a clearer view of what is going on cannot hurt; documenting even a tiny subset of the innumerable relationships that constitute the world we live in may equip us with some kind of navigational instrument. Where are we going, together? What are we doing? What is going on? These are – I believe – important questions.

In order to address these questions, two approaches are generally used. Broadly speaking, the first approach consists in postulating the existence of aggregates capable of inducing states of affairs. Depending on academic traditions, such aggregates take different names: they are sometimes called “social classes,” “fields and habitus,” “cultural habits,” or “social structures,” among many other variations. These differently named yet a priori postulated aggregates are all pretenders to the definition of the social (or society), an influential yet evanescent matter that supposedly surrounds individuals and orientates their actions. The scientific study of this matter and the states of affairs it engenders is what I call the science of the social or, more succinctly, social science.

The second approach – the one I try to embrace – consists in considering the social not as an evanescent matter surrounding individuals but as the small difference produced when two entities come into contact and temporarily associate with each other. This approach postulates that every new connection between two actants – humans (Bob, the president, Mark Zuckerberg) or non-human entities (a wheel, a dream, some legal documents) – makes a small difference that can sometimes be accounted for. If we accept calling “social” the small difference produced when two actants temporarily associate with each other, we may call “socio-logy” the activity that consists in producing texts (logos) about these associations (socius). Our initial four stories are small examples of such an activity: Facebook, Curiosity, Israeli secret services, and Nutella temporarily associate themselves with new entities, and the blending of these new connections contributes to the formation of new configurations summarized within a text. Had we added several rearrangements and accounted for their constitutive associations a little more thoroughly, we would have produced a genuine socio-logical work. Conversely, had we invoked some hidden force in order to explain these reconfigurations; had we attributed the modifications of each state of affairs to some a priori postulated aggregate (e.g., individual rationality, society, culture), we would have produced a small work of social science. This distinction between socio-logy and social science will accompany us throughout this website. It is thus important to keep in mind that my work is – or, at least, is intended to be – socio-logical.

With these clarifications in mind, let us consider our four small socio-logical exercises. What do we see? We quickly notice that each of the four states of affairs is affected by an “algorithm” – for now, loosely defined as a computerized method of calculation – which, in its own way, contributes to modifying a network of relationships. In every rearrangement, one specific algorithm – well-supported by many other elements (researchers, data, tests, etc.) – participates in making Facebook less subject to the spread of hoaxes (Story 1), Curiosity’s wheels a bit more durable (Story 2), Palestinians radically more “jailable” (Story 3), and Nutella temporarily more salable (Story 4). Along with all the entities they are associated with, these methods of computation then seem to participate in changing power dynamics: Facebook, Curiosity’s wheels, Israeli security services, and Nutella become temporarily stronger than Trump-spamming supporters, sharp Martian pebbles, West Bank potential “terrorists,” and palm oil scandals, respectively.

Scholars of Science & Technology Studies (STS) – a subfield of socio-logy that aims at documenting the co-constitution of science, technology and the collective world – nowadays tend to study algorithms’ propensity to modify power dynamics. What do algorithms do? What do they produce? Where does their strength come from? These are questions my colleagues and I tentatively try to answer. Check out some of my propositions!


What do I do? I've just completed my PhD in Social Study of Science & Technology at the University of Lausanne, Switzerland. The rationale behind this PhD project was the following: while many works in Science and Technology Studies are interested in the effects algorithms have on society, very few try to document the causes of these effects. As a consequence, algorithms tend to be depicted as powerful and inscrutable entities we should either adore or reject. But what about understanding a little better where algorithms come from and how they are constituted? Would this not provide empirical grips for more constructive discussions about, and with, algorithms? This, at least, was the hypothesis of my PhD project: by ethnographically inquiring into a computer science laboratory specialized in digital image processing, I tried to better document the process by which algorithms slowly come into existence and, eventually, sometimes, produce differences in the common world. Though this ethnographic method first appeared risky – I had to become competent in computer science in order to observe laboratory life, interact with its members, and contribute to it adequately – the final results are, I hope, quite interesting. What are they, these results?

It first appears that image-processing algorithms (and potentially many others) are existentially linked to referential repositories computer scientists call “ground truths.” These ground truths – which generally take the shape of databases gathering input-data and desired output-targets – define the terms and solutions of algorithmic problems. Ground truths do not appear ex nihilo: they have to be manually designed and shaped by the computer science research community during what I call problematization processes. Once assembled, a ground truth is divided into two subsets: a training set and an evaluation set. The training set is used to extract numerical features capable of automating the transformation of the set’s input-data into their correlated output-targets. Once extracted from the training set and translated into a machine-readable list of instructions, the automated transformation of input-data into output-targets – which often borrows from certified mathematical claims – is confronted with the evaluation set. This confrontation produces performance indicators, generally expressed in terms of precision and recall measures. The results of these performance evaluations are then used to certify the efficiency of algorithms within academic papers. The centrality of ground truths for the design, evaluation, publication, and, therefore, the instauration of image-processing algorithms (and potentially many others) strongly suggests that, to a certain extent, we get the algorithms of our ground truths. This proposition was recently summarized in a paper published in the journal Social Studies of Science.
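As a rough illustration of this ground-truthing workflow, here is a toy sketch in Python. Everything in it is hypothetical – the invented data, the split, and the single-threshold “feature” stand in for the far more complex databases and models of actual image-processing work – but it shows how a training set parametrizes a transformation whose confrontation with an evaluation set yields precision and recall measures:

```python
# A toy "ground truth": pairs of input values and desired binary
# output-targets (all values invented for illustration).
ground_truth = [(0.1, 0), (0.2, 0), (0.35, 0), (0.4, 1),
                (0.55, 1), (0.6, 0), (0.7, 1), (0.8, 1),
                (0.15, 0), (0.9, 1), (0.45, 1), (0.3, 0)]

# Once assembled, the ground truth is divided into two subsets.
training_set = ground_truth[:8]
evaluation_set = ground_truth[8:]

def train_threshold(samples):
    """Extract a numerical feature (a single threshold) that best
    reproduces the training set's input -> target transformation."""
    best_t, best_correct = 0.0, -1
    for t in sorted(x for x, _ in samples):
        correct = sum((x >= t) == bool(y) for x, y in samples)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

threshold = train_threshold(training_set)

# The trained transformation is then confronted with the evaluation
# set, producing precision and recall measures.
outcomes = [(x >= threshold, bool(y)) for x, y in evaluation_set]
tp = sum(p and y for p, y in outcomes)          # true positives
fp = sum(p and not y for p, y in outcomes)      # false positives
fn = sum(not p and y for p, y in outcomes)      # false negatives
precision = tp / (tp + fp) if tp + fp else 0.0
recall = tp / (tp + fn) if tp + fn else 0.0
print(f"threshold={threshold}, precision={precision:.2f}, recall={recall:.2f}")
```

In real projects, of course, the ground truth is a painstakingly assembled database and the extracted features number in the thousands; the overall circulation, however, remains the same.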

My PhD thesis also accounts for the crucial moments when computer scientists try to write lists of instructions in order to make computers compute digital data in desired ways. Accounting for these computer programming practices – without which there would be no operating algorithms – was not an easy task, but the results were, I believe, worth the effort. It appeared that most behavioral and cognitive studies of computer programming miss an important point: a programmer, strictly speaking, never solves any problem. Rather, when attached to a scenario that makes her affect the trajectory of a very swift entity (e.g., an interpreter, a compiler), a programmer tries to assemble a chain of reference that, if sufficiently equipped, may in turn indicate some problematic location within her script. This localization then triggers the enrollment of new actants (e.g., new variables, conditional statements), hence constituting technical workarounds of impasses. Hopefully, this alternative practice-based conception of computer programming will soon be presented more thoroughly in an academic paper.
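A minimal Python episode may illustrate this conception (the script and its names are my own invented illustration, not drawn from the fieldwork): the interpreter halts on a problematic location, and the impasse is worked around – not “solved” – by enrolling a conditional statement:

```python
# A deliberately failing first version: the interpreter -- a very
# swift entity -- halts and produces a chain of reference (a
# traceback) pointing at the problematic location in the script.
def inverses_naive(values):
    return [1 / v for v in values]

try:
    inverses_naive([4, 0, 5])
except ZeroDivisionError as error:
    print("The interpreter indicates an impasse:", error)

# The workaround does not solve the problem in the abstract; it
# enrolls a new actant -- a conditional statement -- that routes
# around the impasse.
def inverses(values):
    return [1 / v for v in values if v != 0]

print(inverses([4, 0, 5]))  # [0.25, 0.2]
```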

My PhD thesis finally documents the practices supporting the enrollment of certified mathematical claims that help to transform the input-data of ground truths’ training sets into their correlated – and already defined – output-targets. Carefully accounting for these formulating practices makes one realize the practicality of theoretical work. It is indeed by successively translating training sets into simpler spreadsheets that connections with the flat “paper” world of certified mathematical knowledge can eventually be established. In short, the relationships between input-data and output-targets that computer scientists try to reproduce have to be trans-formed in order to become mathematicable: from their initial form as complex training sets, these relationships first have to be rearranged into compiled spreadsheets, then into graphs, then into other, less fuzzy graphs, until they acquire shapes that suit some well-documented mathematical functions. These functions, as parametrized by the data of the training sets, are in turn used to define the scenarios of further computer programming episodes. The three main insights of my PhD project thus appear intimately related: formulating practices rely on, and sometimes influence, ground-truthing practices, which are themselves supported by programming practices, which are in turn sometimes irrigated by the results of formulating practices. These things we call “algorithms” and with which we intimately interact may thus be considered, to a certain extent, uncertain products of three interrelated activities: ground-truthing, programming, and formulating.
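This formulating movement can be evoked with a deliberately simple Python sketch, using invented data: once the input–output relationship of a toy training set is rearranged into the right shape, it suits a well-documented mathematical function (here, a straight line fitted by ordinary least squares), whose parameters then become available for further programming episodes:

```python
# A toy training set: input values and their desired numerical
# output-targets (all values invented for illustration).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

# Rearranged as two aligned columns, the x -> y relationship suits a
# well-documented mathematical function: a straight line y = a*x + b,
# parametrized here by ordinary least squares.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
     / sum((x - mean_x) ** 2 for x in xs))
b = mean_y - a * mean_x

def predict(x):
    """The parametrized function, ready to script further
    programming episodes."""
    return a * x + b

print(f"y = {a:.2f}*x + {b:.2f}; predict(6) = {predict(6):.2f}")
```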

If you want to know more about these weird (but empirically grounded) propositions, don’t hesitate to contact me to get the full dissertation.


Papers (peer-reviewed)

Camus A, Jaton F, Oberhauser PN and Vinck D (2018, accepted) Localités distribuées, globalités localisées : Actions, actants et médiations au service de l’ethnographie du numérique. Symposium.

Jaton F (2017) We get the algorithms of our ground truths: Designing referential databases in digital image processing. Social Studies of Science 47(6): 811-840.

Jaton F and Vinck D (2016) Unfolding frictions in database projects. Revue d'anthropologie des connaissances 10(4): a-m. DOI: 10.3917/rac.033.a.

Jaton F and Vinck D (2016) Processus frictionnels de mises en bases de données. Revue d'anthropologie des connaissances 10(4): 489-501. DOI: 10.3917/rac.033.0489.

Jaton F and Vinck D (2016) Procesos friccionales de puesta en bases de datos. Revue d'anthropologie des connaissances 10(4): I-XVI. DOI: 10.3917/rac.033.i.


Theses

Jaton F (2017) The Constitution of Algorithms. Ground-Truthing, Programming, Formulating. PhD Thesis, University of Lausanne, Switzerland. Winner of the 2018 Société Académique Vaudoise Award for the Best PhD Thesis of the University of Lausanne.

Jaton F (2013) Ethnographie d'un projet d'architecture. Une expérimentation sur le mode de l'acteur-réseau. Master Thesis, University of Lausanne, Switzerland.

International Conferences

Jaton F (2017) Ground truths: Designing referential databases for image-processing algorithms. In: 2017 Meeting of the Society for Social Studies of Science, Boston, MA, 30 August - 2 September, 2017.

Jaton F (2015) Towards the ethnography of computational devices: The performativity of ‘ground truths’ in computer science. In: 2015 Meeting of the Society for Social Studies of Science, Denver, CO, 11-14 November, 2015.

Jaton F (2015) Résultats préliminaires d’une ethnographie de laboratoire : Comment attester des performances de son algorithme ? In: 2015 Congrès de la société suisse de sociologie, Lausanne, Switzerland, 3-5 June, 2015.

Jaton F (2014) Acteur-réseau, humanités digitales et modes d’existence. In: 2014 Colloque AISLF : Science, Innovation, Technique et Société, Bordeaux, France, 9-11 July, 2014.

Reports (mostly in French)

Jaton F (2015) Encore une base de données sur des manuscrits ! Approches pragmatiques des manuscrits de la mer Morte. Mimeo, report on the talk given by David Hamidovic at the LaDHUL seminar of 26 March 2015.

Jaton F (2015) Représentation de la migration non-documentée tunisienne (harga) sur Internet. Approches méthodologiques pour une anthropologie de Facebook. Mimeo, report on the talk given by Monica Salzbrunn and Simon Mastrangelo at the LaDHUL seminar of 3 March 2015.

Jaton F (2015) Big Data : À qui profite le Même ? Mimeo, report on the talk given by Sami Coll at the LaDHUL seminar of 15 February 2015.

Jaton F (2014) Du corpus numérisé au matériau. L’exemple du Montreux Jazz Digital Project. Mimeo, report on the talk given by Alexandre Camus at the LaDHUL seminar of 11 December 2014.

Jaton F (2014) Big Data Challenges, Opportunities and Avenues of Research. Mimeo, report on the talk given by Periklis Andritsos at the LaDHUL seminar of 24 November 2014.

Jaton F (2014) Du jeu de données à la multiplication des sources. Remise en cause d’un paradigme de recherche ? Mimeo, report on the talk given by Dominique Joye at the LaDHUL seminar of 28 October 2014.

Jaton F (2014) Constitution d’une base de données de dessins de dieux réalisés par des enfants. Mimeo, report on the talk given by Pierre-Yves Brandt at the LaDHUL seminar of 29 April 2014.

Jaton F (2014) Approches sociologiques des bases de données. Mimeo, report on the talk given by Dominique Vinck and Pierre-Nicolas Oberhauser at the LaDHUL seminar of 31 March 2014.

Jaton F (2014) Le projet Lumières.Lausanne. Mimeo, report on the talk given by Bela Kapossy and Marion Rivoal at the LaDHUL seminar of 20 March 2014.

Jaton F (2014) Bases de données relationnelles (notions et souvenirs d’un amateur) : une approche STS. Mimeo, report on the talk given by Andréas Perret at the LaDHUL seminar of 30 January 2014.

Jaton F (2013) Mémoires Falashas. Mimeo, report on the talk given by Charlotte Touati at the LaDHUL seminar of 3 December 2013.

Jaton F (2013) Bases de données en sciences humaines : création et pérennisation. Mimeo, report on the talk given by Nicolas Bugnon at the LaDHUL seminar of 30 September 2013.


I've just completed my PhD in Social Study of Science & Technology at the University of Lausanne, Switzerland. The title of the dissertation is The Constitution of Algorithms. Ground-Truthing, Programming, Formulating. In May 2018, I was surprised to learn that this thesis had been awarded the 2018 Société Académique Vaudoise Award for the Best PhD Thesis of the University of Lausanne. Quite an honor. If you’re interested in this work, you can get an extended (though cryptic) summary here. If you want the full document, don’t hesitate to contact me.

I previously obtained a Master's degree in Political Science and a Bachelor's degree in Philosophy and Literature. But I guess the easiest way to introduce myself professionally is to let you look at my resume, is it not?