
Intelligence is programmed to know sensory phenomena that are necessarily immersed in space and time. As for phenomena, their main dimensions (space, time, causality, and so on) are forms of the transcendental subject, not intrinsic characteristics of reality. Descartes, Spinoza, Leibniz, the English and French Enlightenment, and Kant accomplished a great deal in two centuries, and paved the way for the modern philosophy of the nineteenth and twentieth centuries. A new form of reflexive knowledge grew, spread, and fragmented into the human sciences, which mushroomed with the end of the monopoly of theosophy.

As this dispersion occurred, great philosophers attempted to grasp reflexive knowledge in its unity. The reflexive knowledge of the scientific era neither suppressed nor abolished reflexive knowledge of the theosophical type, but it opened up a new domain of legitimacy of knowledge, freed of the ideal of divine knowledge. This de jure separation did not prevent de facto unions, since there was no lack of religious scholars or scholarly believers. Modern scientists could be believers or non-believers. Their position in relation to the divinity was only a matter of motivation.

Believers loved science because it revealed the glory of the divinity, and non-believers loved it because it explained the world without God. But neither of them used as arguments what now belonged only to their private convictions. In the human sciences, there were systematic explorations of the determinations of human existence. And since we are thinking beings, the determinations of our existence are also those of our thought. How do the technical, historical, economic, social and political conditions in which we live form, deform and set limits on our knowledge?

What are the structures of our biology, our language, our symbolic systems, our communicative interactions, our psychology and our processes of subjectivation? Modern thought, with its scientific and critical ideal, constantly searches for the conditions and limits imposed on it, particularly those that are as yet unknown to it, that remain in the shadows of its consciousness.

I will now broadly outline the figure of the transcendental subject of the scientific era, a figure that re-examines and at the same time transforms the three complementary aspects of the agent intellect. An evolving transcendental subject emerges from this reflexive cycle in which the living intelligence contemplates its own image in the form of a scientifically intelligible intelligence. Scientific investigation here is the internal mirror of the transcendental subjectivity, the mediation through which the living intelligence observes itself.

It is obviously impossible to confuse the living intelligence and its scientifically intelligible image, any more than one can confuse the map and the territory, or the experience and its description.


Nor can one confuse the mirror (scientific investigation) with the being reflected in it (the living intelligence), nor with the image that appears in the mirror (the intelligible intelligence). These three aspects together form a dynamic unit that would collapse if one of them were eliminated. While the living intelligence would continue to exist without a mirror or scientific image, it would be very much diminished. It would have lost its capacity to reflect from a universal perspective.

The creative paradox of the intellectual reflexivity of the scientific age may be formulated as follows. It is clear, first of all, that the living intelligence is truly transformed by scientific investigation, since the living intelligence that knows its image through a certain scientific investigation is not the same (does not have the same experience) as the one that does not know it, or that knows another image, the result of another scientific investigation. But it is just as clear, by definition, that the living intelligence reflects itself in the intelligible image presented to it through scientific knowledge.

In other words, the living intelligence is equally dependent on the scientific and critical investigation that produces the intelligible image in which it is reflected. When we observe our physical appearance in a mirror, the image in the mirror in no way changes our physical appearance, only the mental representation we have of it. However, the living intelligence cannot discover its intelligible image without including the reflexive process itself in its experience, and without at the same time being changed.

In short, a critical science that explores the limits and determinations of the knowing subject does not only reflect knowledge—it increases it. Thus the modern transcendental subject is—by its very nature—evolutionary, participating in a dynamic of growth. In line with this evolutionary view of the scientific age, which contrasts with the fixity of the previous age, the collectivity that possesses reflexive knowledge is no longer a theosophical hierarchy oriented toward the agent intellect but a republic of letters oriented toward the augmentation of human knowledge, a scientific community that is expanding demographically and is organized into academies, learned societies and universities.

While the agent intellect looked out over a cosmos emanating from eternity, in analog resonance with the human microcosm, the transcendental subject explores a universe infinitely open to scientific investigation, technical mastery and political liberation. Reflexive knowledge has, in fact, always been informed by some technology, since it cannot be exercised without symbolic tools and thus the media that support those tools.

But the next age of reflexive knowledge can properly be called technological because the technical augmentation of cognition is explicitly at the centre of its project. Technology now enters the loop of reflexive consciousness as the agent of the acceleration of its own augmentation. This last point was no doubt glimpsed by a few pre-twentieth-century philosophers, such as Condorcet in the eighteenth century, in his posthumous Sketch for a Historical Picture of the Progress of the Human Mind.

But the truly technological dimension of reflexive knowledge really began to be thought about fully only in the twentieth century, with Pierre Teilhard de Chardin, Norbert Wiener and Marshall McLuhan, to whom we should also add the modest genius Douglas Engelbart. The regulating ideal of the reflexive knowledge of the theosophical age was the agent intellect, that of the scientific-critical age was the transcendental subject, and that of the coming technological age is algorithmic intelligence.

From the agent intellect it also inherits the power to be reflected in finite intelligences. But, in contrast with the agent intellect, instead of descending from eternity, it emerges from the multitude of human actions immersed in space and time. Like the transcendental subject, algorithmic intelligence is rational, critical, scientific, purely human, evolutionary and always in a state of learning.

But the vocation of the transcendental subject was to reflexively contain the human universe.

However, the human universe no longer has a recognizable face. The labyrinth of philosophies, methodologies, theories and data from the human sciences has become inextricably complicated. The transcendental subject has not only been dissolved in symbolic structures or anonymous complex systems, it is also fragmented in the broken mirror of the disciplines of the human sciences. It is obvious that the technical medium of a new figure of reflexive knowledge will be the Internet, and more generally, computer science and ubiquitous communication.

But how can symbol-manipulating automata be used on a large scale not only to reunify our reflexive knowledge but also to increase the clarity, precision and breadth of the teeming diversity enveloped by our knowledge? The missing link is not only technical, but also scientific.


We need a science that grasps the new possibilities offered by technology in order to give collective intelligence the means to reflect itself, thus inaugurating a new form of subjectivity. As the groundwork of this new science—which I call computational semantics—IEML makes use of the self-reflexive capacity of language without excluding any of its functions, whether they be narrative, logical, pragmatic or other.

Computational semantics produces a scientific image of collective intelligence. Scientific change will generate a phenomenological change [28], since ubiquitous multimedia interaction with a holographic image of collective intelligence will reorganize the human sensorium. The last, but not least, change: the community that possessed the previous figure of reflexive knowledge was a scientific community that was still distinct from society as a whole. But in the new figure of knowledge, reflexive collective intelligence emerges from any human group.

Like the previous figures—theosophical and scientific—of reflexive knowledge, algorithmic intelligence is organized in three interdependent aspects. In short, in the emergent unity of algorithmic intelligence, computational semantics calculates the cognitive simulation that augments and reflects the collective intelligence of the coming civilization. I responded at length in The Semantic Sphere to the prejudices of extremist post-modernism against scientific universality.

But the only thing that Esperanto and IEML have in common is the fact that they are artificial languages. They have neither the same form nor the same purpose, nor the same use, which invalidates criticisms of IEML based on the criticism of Esperanto.

Since then, he has been working on a major undertaking: IEML, which already has its own grammar, is a metalanguage that includes the semantic dimension, making it computable. This in turn allows a reflexive representation of collective intelligence processes.


In the book The Semantic Sphere I, IEML is presented as a system for encoding meaning that adds transparency, interoperability and computability to the operations that take place in digital memory. By formalising meaning, this metalanguage adds a human dimension to the analysis and exploitation of the data deluge that is the backdrop of our lives in the digital society.

It also offers a new standard for the human sciences, with the potential to accommodate maximum diversity and interoperability, and to create a new space of collaboratively produced, dynamic, quantitative knowledge. What are the characteristics of this augmented collective intelligence? The first thing to understand is that collective intelligence already exists. It is not something that has to be built. Collective intelligence exists at the level of animal societies. In addition to the means of communication used by animals, human beings also use language, technology, complex social institutions and so on, which, taken together, create culture.

Bees have collective intelligence but without this cultural dimension. In addition, human beings have personal reflexive intelligence that augments the capacity of global collective intelligence. This is not true for animals but only for humans. Now the point is to augment human collective intelligence. The main way to achieve this is by means of media and symbolic systems.

Human collective intelligence is based on language and technology and we can act on these in order to augment it. The first leap forward in the augmentation of human collective intelligence was the invention of writing. Then we invented more complex, subtle and efficient media like paper, the alphabet and positional systems to represent numbers using ten numerals including zero.
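To see concretely what positional notation with a zero numeral buys, here is a minimal sketch (my illustration, not part of the historical account): any number becomes a weighted sum of powers of the base, so ten numerals suffice for quantities of any size, with zero marking the empty positions.

```python
# Illustrative sketch: positional notation encodes a number as a weighted
# sum of powers of the base; the zero numeral marks an "empty" position.

def to_positional(n: int, base: int = 10) -> list[int]:
    """Decompose a non-negative integer into digits, most significant first."""
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        digits.append(n % base)   # remainder = digit at the current position
        n //= base
    return digits[::-1]

# 507 = 5*10**2 + 0*10**1 + 7*10**0 -- the zero keeps the tens place explicit.
assert to_positional(507) == [5, 0, 7]
```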

All of these things led to a considerable increase in collective intelligence. Then there was the invention of the printing press and electronic media. Now we are in a new stage of the augmentation of human collective intelligence: the algorithmic medium. Our new technical structure has given us ubiquitous communication, interconnection of information, and — most importantly — automata that are able to transform symbols. With these three elements we have an extraordinary opportunity to augment human collective intelligence.


You have suggested that there are three stages in the progress of the algorithmic medium prior to the semantic sphere, and that this externalisation of the collective human memory and intellectual processes has increased individual autonomy and the self-organisation of human communities. How has this led to a global, hypermediated public sphere and to the democratisation of knowledge? This democratisation of knowledge is already happening.

If you have ubiquitous communication, it means that you have access to almost any kind of information, almost for free. We can also speak about blogs, social media, and the growing open data movement. When you have access to all this information, when you can participate in social networks that support collaborative learning, and when you have algorithms at your fingertips that can help you to do a lot of things, there is a genuine augmentation of collective human intelligence, an augmentation that implies the democratisation of knowledge.

Cultural institutions are publishing data in an open way; they are participating in broad conversations on social media, taking advantage of the possibilities of crowdsourcing, and so on. They also have the opportunity to develop an open, bottom-up knowledge management strategy.

Our species is producing and storing data in volumes that surpass our powers of perception and analysis. How is this phenomenon connected to the algorithmic medium? It was always there. It is just that we now have more data and more people are able to get this data and analyse it.

There has been a huge increase in the amount of information generated in the period from the second half of the twentieth century to the beginning of the twenty-first century. At the beginning only a few people used the Internet, and now almost half of the human population is connected. At first the Internet was a way to send and receive messages. We were happy because we could send messages to the whole planet and receive messages from the entire planet. But the biggest potential of the algorithmic medium is not the transmission of information: it is the automated transformation and analysis of data. We could say that the big data available on the Internet is currently analysed, transformed and exploited by big governments, big scientific laboratories and big corporations.

In the future there will be a democratisation of the processing of big data. It will be a new revolution. If you think about the situation of computers in the early days, only big companies, big governments and big laboratories had access to computing power. But nowadays we have the revolution of social computing and decentralized communication by means of the Internet. I look forward to the same kind of revolution regarding the processing and analysis of big data. Communications giants like Google and Facebook are promoting the use of artificial intelligence to exploit and analyse data.

This means that logic and computing tend to prevail in the way we understand reality. IEML, however, incorporates the semantic dimension. How will this new model be able to describe the way we create and transform meaning, and make it computable? The so-called Semantic Web is based on logical links between data and on algebraic models of logic.

There is no model of semantics there. So in fact there is currently no model that sets out to automate the creation of semantic links in a general and universal way. We have very powerful tools at our disposal, we have enormous, almost unlimited computing power, and we have a medium where communication is ubiquitous. You can communicate everywhere, all the time, and all documents are interconnected. Now the question is how to compute with meaning itself. This is why I have invented a language that automatically computes its internal semantic relations.

When you write a sentence in IEML it automatically creates the semantic network between the words in the sentence, and shows the semantic networks between the words in the dictionary. When you write a text in IEML, it creates the semantic relations between the different sentences that make up the text. Moreover, when you select a text, IEML automatically creates the semantic relations between this text and the other texts in a library.

So you have a kind of automatic semantic hypertextualisation. Plus, IEML self-translates automatically into natural languages, so that users will not be obliged to learn this code. The most important thing is that if you categorize data in IEML it will automatically create a network of semantic relations between the data. You can have automatically-generated semantic relations inside any kind of data set.
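To make this mechanism concrete, here is a purely hypothetical toy model (not IEML, whose grammar and dictionary are far richer): once items are categorised with terms from a shared vocabulary, a program can derive semantic links between them mechanically. All names and data below are invented for the example.

```python
# Hypothetical toy model, not IEML: items are tagged with terms from a
# shared vocabulary, and semantic links are derived mechanically from
# overlapping terms. IEML's actual semantic networks are far richer.

from itertools import combinations

documents = {
    "doc_a": {"collective", "intelligence", "learning"},
    "doc_b": {"learning", "memory"},
    "doc_c": {"economy", "information"},
}

def semantic_links(tagged):
    """Link every pair of items that share at least one category term."""
    links = []
    for (a, terms_a), (b, terms_b) in combinations(tagged.items(), 2):
        shared = terms_a & terms_b
        if shared:
            links.append((a, b, shared))
    return links

# doc_a and doc_b are linked through the shared term "learning";
# doc_c shares no terms with the others, so it stays unlinked.
print(semantic_links(documents))
```

Even this crude version shows the point of a single metalanguage: the linking step never needs to know what the terms mean, only how their expressions relate.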

So IEML provides a system of computable metadata that makes it possible to automate semantic relationships. Do you think it could become a new common language for the human sciences and contribute to their renewal and future development? Everyone will be able to categorise data however they want. Any discipline, any culture, any theory will be able to categorise data in its own way, to allow diversity, using a single metalanguage, to ensure interoperability.

This will automatically generate ecosystems of ideas that will be navigable with all their semantic relations. You will be able to compare different ecosystems of ideas according to their data and the different ways of categorising them. You will be able to choose different perspectives and approaches: for example, the same people interpreting different sets of data, or different people interpreting the same set of data. IEML ensures the interoperability of all ecosystems of ideas.

On one hand you have the greatest possibility of diversity, and on the other you have computability and semantic interoperability. I think that it will be a big improvement for the human sciences because today the human sciences can use statistics, but it is a purely quantitative method. They can also use automatic reasoning, but it is a purely logical method. But with IEML we can compute using semantic relations, and it is only through semantics in conjunction with logic and statistics that we can understand what is happening in the human realm.

We will be able to analyse and manipulate meaning, and there lies the essence of the human sciences. It is still too early to say; perhaps the first application may be a kind of collective intelligence game in which people will work together to build the best ecosystem of ideas for their own goals.
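As a deliberately minimal illustration of how semantics and statistics could be combined (my sketch, not an IEML algorithm), the following compares two ecosystems of ideas, reduced here to flat sets of category terms, with a simple Jaccard overlap score; real comparisons would operate on full semantic networks.

```python
# Minimal illustration: compare two "ecosystems of ideas" (reduced to flat
# sets of category terms) with a Jaccard similarity statistic. A real
# semantic comparison would use the full graph of relations, not flat sets.

def jaccard(a: set, b: set) -> float:
    """Share of categories that the two ecosystems have in common."""
    return len(a & b) / len(a | b) if (a | b) else 1.0

ecosystem_1 = {"collective", "intelligence", "learning", "memory"}
ecosystem_2 = {"collective", "intelligence", "economy"}

print(f"similarity: {jaccard(ecosystem_1, ecosystem_2):.2f}")  # -> 0.40
```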

I published The Semantic Sphere in 2011, and I finished the grammar that has all the mathematical and algorithmic dimensions six months ago. I am writing a second book entitled Algorithmic Intelligence, where I explain all these things about reflexivity and intelligence. The IEML dictionary will be published online in the coming months. It will be the first kernel, because the dictionary has to be augmented progressively, and not just by me.

I hope other people will contribute. This IEML interlinguistic dictionary ensures that semantic networks can be translated from one natural language to another. Could you explain how it works, and how it incorporates the complexity and pragmatics of natural languages? The basis of IEML is a simple commutative algebra (a regular language) that makes it computable. A special coding of the algebra, called Script, allows for recursivity, self-referential processes and the programming of rhizomatic graphs. The algorithmic grammar transforms the code into fractally complex networks that represent the semantic structure of texts.

The dictionary, made up of terms organized according to symmetric systems of relations (paradigms), gives content to the rhizomatic graphs and creates a kind of common coordinate system of ideas. Working together, the Script, the algorithmic grammar and the dictionary create a symmetric correspondence between individual algebraic operations and different semantic networks expressed in natural languages. The semantic sphere brings together all possible texts in the language, translated into natural languages, including the semantic relations between all the texts.
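The division of labour just described (Script, grammar, dictionary) can be suggested in miniature. The sketch below is my own hypothetical reduction: the symbols are invented, not IEML's actual primitives, and the grammar is collapsed to a single regular expression. It only shows how a regular language can decide well-formedness while a dictionary of paradigms supplies natural-language content.

```python
# Hypothetical miniature of the pipeline described above. The symbols are
# invented stand-ins, not IEML's actual primitives; the grammar is reduced
# to one regular expression over them.

import re

PRIMITIVES = "xyz"                      # invented stand-ins for primitives
EXPR = re.compile(f"[{PRIMITIVES}]+$")  # the regular language: primitive+

dictionary = {                          # a toy paradigm of related terms
    "x": "sign", "y": "being", "z": "thing",
    "xy": "sign of being", "xz": "sign of thing",
}

def decode(expression: str) -> str:
    """Check an expression against the regular language, then look it up."""
    if not EXPR.match(expression):
        raise ValueError(f"{expression!r} is not a well-formed expression")
    return dictionary.get(expression, "<not yet in the dictionary>")

print(decode("xy"))  # -> sign of being
```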

On the playing field of the semantic sphere, dialogue, intersubjectivity and pragmatic complexity arise, and open games allow free regulation of the categorisation and the evaluation of data. Ultimately, all kinds of ecosystems of ideas — representing collective cognitive processes — will be cultivated in an interoperable environment. Since IEML automatically creates very complex graphs of semantic relations, one of the development tasks that is still pending is to transform these complex graphs into visualisations that make them usable and navigable. How do you envisage these big graphs?

Can you give us an idea of what the visualisation could look like? The idea is to project these very complex graphs onto a 3D interactive structure. These could be spheres, for example, so you will be able to go inside the sphere corresponding to one particular idea and you will have all the other ideas of its ecosystem around you, arranged according to the different semantic relations. You will be also able to manipulate the spheres from the outside and look at them as if they were on a geographical map.

And you will be able to zoom in and zoom out of fractal levels of complexity. Ecosystems of ideas will be displayed as interactive holograms in virtual reality on the Web through tablets, and as augmented reality experienced in the 3D physical world through devices such as Google Glass.
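How ideas might be laid out on such a sphere can be sketched geometrically (again my illustration, with invented data; a real visualisation would place ideas according to their semantic relations rather than uniformly). The Fibonacci-sphere method below spreads the ideas of an ecosystem evenly over a sphere around the viewer:

```python
# Illustrative geometry only: spread the ideas of an ecosystem evenly over
# a sphere around the viewer (Fibonacci-sphere layout). A real IEML
# visualisation would position ideas by semantic relation, not uniformly.

import math

def sphere_layout(ideas, radius=1.0):
    """Return (idea, (x, y, z)) points spread evenly over a sphere."""
    n = len(ideas)
    golden = math.pi * (3 - math.sqrt(5))  # golden angle, in radians
    points = []
    for i, idea in enumerate(ideas):
        y = 1 - 2 * (i + 0.5) / n          # height runs from ~1 down to ~-1
        ring = math.sqrt(1 - y * y)        # radius of the ring at height y
        theta = golden * i
        points.append((idea, (radius * ring * math.cos(theta),
                              radius * y,
                              radius * ring * math.sin(theta))))
    return points

for idea, (x, y, z) in sphere_layout(["memory", "learning", "economy", "sign"]):
    print(f"{idea:10s} -> ({x:+.2f}, {y:+.2f}, {z:+.2f})")
```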

There are social concerns about possible abuses and privacy infringement, and some big companies are starting to consider drafting codes of ethics to regulate and prevent the abuse of data. Do you think a fixed set of rules can effectively regulate the changing environment of the algorithmic medium? How can IEML contribute to improving the transparency and regulation of this medium? IEML does not only allow transparency, it allows symmetrical transparency. Everybody participating in the semantic sphere will be transparent to others, but all the others will also be transparent to him or her. The problem with hyper-surveillance is that transparency is currently not symmetrical.

What I mean is that ordinary people are transparent to big governments and big companies, but these big companies and big governments are not transparent to ordinary people. There is no symmetry. Power differences between big governments and little governments or between big companies and individuals will probably continue to exist.

But we can create a new public space where this asymmetry is suspended, and where powerful players are treated exactly like ordinary players. And to finish up: last month the CCCB Lab began a series of workshops related to the Internet Universe project, which explores the issue of education in the digital environment. People have to accept their personal and collective responsibility.



So we have a great deal of responsibility for what happens online. Whatever is happening is the result of what all the people are doing together; the Internet is an expression of human collective intelligence. Therefore, we also have to develop critical thinking. Everything that you find on the Internet is the expression of particular points of view that are neither neutral nor objective, but expressions of active subjectivities. Where does the money come from? Where do the ideas come from?

The more we know the answers to these questions, the greater the transparency of the source… and the more it can be trusted. This notion of making the source of information transparent is very close to the scientific mindset. Because scientific knowledge has to be able to answer questions such as: Where did the data come from? Where does the theory come from?

Where do the grants come from? Transparency is the new objectivity.

His ideas on collective intelligence have been essential for the comprehension of some phenomena of contemporary communication, and his research on the Information Economy MetaLanguage (IEML) is today one of the biggest promises of data processing and knowledge management.

Collective intelligence can be defined as shared knowledge that exists everywhere, that is constantly enhanced and coordinated in real time, and that drives the effective mobilization of skills. In this regard, it is understood that collective intelligence is not a quality exclusive to human beings. You are totally right when you say that collective intelligence is not exclusive to the human race. We know that ants, bees, and in general all social animals have collective intelligence. They solve problems together and, as social animals, they are not able to survive alone; this is also the case with the human species: we are not able to survive alone, and we solve problems together.

But there is a big difference that is related to the use of language: animals are able to communicate, but they do not have language; they cannot ask questions, tell stories, have dialogues, or communicate about their emotions, their fears, and so on. This ability to play with symbolic systems, to play with tools and to build complex social institutions creates a much more powerful collective intelligence for humans. I would also say that there are two important features that come from human culture. The first is that human collective intelligence can improve over the course of history, because each new generation can improve the symbolic systems, the technology and the social institutions; so there is an evolution of human collective intelligence, and of course we are talking about a cultural evolution, not a biological one.

And then, finally, maybe the most important feature of human collective intelligence is that each unit of the human collectivity has the ability to reflect, to think by itself. That is the main difference between human and animal collective intelligence. Do writing and digital technologies also contribute to this difference? In oral culture there was a certain kind of transmission of knowledge but, of course, when we invented writing systems we were able to accumulate much more knowledge to transmit to the next generations.

With the invention of the diverse writing systems, and then their improvements (the alphabet, paper, the printing press, and then the electronic media), human collective intelligence expanded. The ability to build libraries, to organize scientific coordination and collaboration, and the communication supported by the telephone, radio and television make human collective intelligence more powerful, and I think that augmenting it further will be the main challenge our generation and the next will have to face. In an interview conducted by Howard Rheingold, you mentioned that every device and technology that has the purpose of enhancing language also enhances collective intelligence and, at the same time, has an impact on cognitive skills such as memory, collaboration and the ability to connect with one another.

Taking this into account, maybe the most important sector where we should put particular effort is scientific research and learning, because we are talking about knowledge: the most important part is the creation of knowledge, the dissemination of knowledge and, more generally, collective and individual learning.

Today there is a transformation of communication in the scientific community: more and more journals are open and online, people are forming virtual teams, they communicate via the Internet, they use large amounts of digital data, and they process this data with computing power; so we are already witnessing this augmentation, but we are just at the beginning of this new approach. In the case of learning, I think it is very important that we recognize the emergence of new ways of learning online collaboratively, where people who want to learn help each other, communicate, and accumulate common memories from which they can take what interests them.

This collective learning is not limited to schools; it happens in all kinds of social environments.

We have to realize that learning is, and always has been, an individual process at its core. Someone has to learn; you cannot learn for someone else. Helping other people to learn is teaching, but the learner does the real work. Then, if the learners are helping each other, you have a process of collective learning. Of course, it works better if these people are interested in the same topics or if they are engaged in the same activities. Collective learning augmentation is something very general, and it has increased with online communication.

It also happens at the political level; there is augmented deliberation, because people can discuss easily on the Internet, and there is also enhanced coordination for public demonstrations and similar things. There is a process of artificialization of cognition in general that is very old; it began with writing, with books; it is already a kind of externalization or objectification of memory.

I mean, a library, for instance, is something that is completely material, completely technical, and without libraries we would be much less intelligent. We cannot be against libraries because, instead of being pure brain, they are just paper and ink, buildings and index cards. It is the same kind of reasoning with the algorithmic medium as with libraries: it is just another technology, more powerful, but it is the same idea. It is an augmentation of our cognitive ability, individual and collective, so it is absurd to be afraid of it.

But we have to distinguish very clearly between the material support and the texts. The texts come from our mind, but the text that is in my mind can be projected on paper as well as in a computer network.
