Mice with two biological dads; GPT-4 saved this dog; Bing writes winning caption for New Yorker cartoon contest; AI creates 3D scene from one photo
AI development: these 4 camps are at odds!
Hello,
You are receiving the Parlons Futur newsletter: at most once a week, a selection of news summarized in bullet points on tech 🤖, science 🔬, economics 💰, geopolitics 🌏, and defense ⚔️ topics, to help you better apprehend the future 🔮.
My name is Thomas; more about me at the bottom of this email.
So here is my latest selection!
The aperitif
AI is already taking video game illustrators’ jobs in China (source)
“AI is developing at a speed way beyond our imagination. Two people could potentially do the work that used to be done by 10.”
AI-generated art had become so good that some illustrators talked about giving up drawing altogether. “Our way of making a living is suddenly destroyed,” said a game artist in Guangdong
36% of researchers believe that AI could cause a "nuclear-level catastrophe."
According to a survey conducted by Stanford University's Institute for Human-Centered AI
57% of researchers, for example, think that "recent research progress" is paving the way for artificial general intelligence.
AI Could Enable Humans to Work 4 Days a Week, Says Nobel Prize-Winning Economist (Time)
“We could increase our well-being generally from work and we could take off more leisure."
says Christopher Pissarides, who specializes in the impact of automation on work
Image generation: MidJourney, a self-funded 11-person team, does better than Adobe; see these examples from the same prompt
one reason seems to be that Adobe Firefly is trained only on Creative Commons images and Adobe's own stock photos, so it is not good at generating copyrighted characters
but Adobe Firefly does worse even on examples with no copyrighted characters
Samsung Workers Accidentally Leaked Trade Secrets via ChatGPT (source)
ChatGPT’s data policy says it uses data to train its models unless you request to opt out. In ChatGPT’s usage guide, it explicitly warns users not to share sensitive information in conversations.
One employee pasted confidential source code into the chat to check for errors.
Another shared a recording of a meeting to convert into notes for a presentation.
Demo of a prototype that tells you in real time what to say during a job interview
Remote work: remote work, popular with workers, doesn't work, except for developers. That, at least, is the takeaway from the Time Ltd study, which Petitweb reveals here: half a day of remote work is like a day off. The study draws on data from tens of thousands of employees at large companies. The conclusion: workers should return to the office at least four days a week.
More cool AI-generated pictures of the pope
Mice With Two Dads Were Born From Eggs Made From Male Skin Cells (source)
Thanks to iPSC (induced pluripotent stem cell) technology, scientists have been able to bypass nature to engineer functional eggs, reconstruct artificial ovaries, and give rise to healthy mice from two mothers. Yet no one has been able to crack the recipe of healthy offspring born from two dads until now.
In a study published in Nature, researchers described how they scraped skin cells from the tails of male mice and used them to create functional egg cells. When fertilized with sperm and transplanted into a surrogate, the embryos gave rise to healthy pups, which grew up and had babies of their own.
but… the success rate in mice was very low, at just over 1%, for now…
How Mars Rock Samples Would Make Their Way to Earth (more info here on NASA's website)
Vinod Khosla (cofounder of Sun Microsystems in 1982 and cleantech fund Khosla Ventures in 2004) on AI (source)
"In 25 years, 80% of all jobs will be capable of being done by an AI." "AI will ‘free humanity from the need to work’"
Wow: a computer model that can create realistic 3D pictures from just one photo, and it can show the same scene from different angles or even make a 3D video (see examples in video here)
GPT-4 saved this dog's life. Read the story on Twitter
And at the same time: when Dean Buonomano, a neuroscientist at U.C.L.A., asked GPT-4 “What is the third word of this sentence?,” the answer was “third.”
These examples may seem trivial, but the cognitive scientist Gary Marcus wrote on Twitter that “I cannot imagine how we are supposed to achieve ethical and safety ‘alignment’ with a system that cannot understand the word ‘third’ even [with] billions of training examples.”
Bing AI (powered in part by GPT-4) invented one of the 3 winning captions in a New Yorker cartoon caption contest. (source)
Bing AI wrote it in response to a prompt describing the image and asking it to come up with a funny, abstract caption
the caption: “They’re not looking for the exit, they’re looking for meaning”
Bill Gates after trying a self-driving car in London: "We’ve made tremendous progress on autonomous vehicles, or AVs, in recent years, and I believe we’ll reach a tipping point within the next decade"
"The car drove us around downtown London, which is one of the most challenging driving environments imaginable, and it was a bit surreal to be in the car as it dodged all the traffic. (Since the car is still in development, we had a safety driver in the car just in case, and she assumed control several times.)"
"a vehicle made by the British company Wayve, which has a fairly novel approach. While a lot of AVs can only navigate on streets that have been loaded into their system, the Wayve vehicle operates more like a person. It can drive anywhere a human can drive."
His article; see the 2-min video of the ride on YouTube
Tech joke: “How many people work at Google?” “Oh, about half.”
Share this newsletter via WhatsApp by clicking here ❤️
If you enjoy this free digest, please take 3 seconds to forward it to even just one contact via WhatsApp by clicking here 🙂
And if this email was shared with you, you can click here to subscribe so you don't miss the next ones
Time to eat!
My latest podcasts
Approximations and omissions in the latest C dans l'Air episode on AI (Google, Apple, or search for Parlons Futur in your favorite podcast app and find the title)
Apologies: I played excerpts from C dans l'Air that I then comment on; unfortunately the excerpts are hard to hear, but you can simply skip them, since I summarize each one before commenting on it. Sorry, I'll do better next time.
Plenty of useful resources mentioned during the episode can be found in its description
After ChatGPT, say hello to Action Engines (Google, Apple, Podbean)
After text and ChatGPT, here is how video will take AI to the next level (Google, Apple, Podbean)
Does a super AI mean the end of humanity? A summary of Eliezer Yudkowsky's thesis (Google, Apple, Podbean)
A conversation with Djoann Fal: barely 30, from a small town in the Var, he has already found success with his startup in Southeast Asia, was named to Forbes Asia's "30 under 30" list in 2018 among other distinctions, and today heads a green-economy investment fund in Asia. He tells us his story!
See the topics covered in the podcast description here
Google, Apple, Podbean, or search for "French Connection : conversations avec des entrepreneurs du monde" in your favorite podcast app; it's the latest episode
Summary of Yann LeCun's interview on France Inter (30-min video on YouTube)
ChatGPT is "good engineering" but "not revolutionary"
The largest deployment of modern AI today is in content moderation on social networks, which has made great progress in recent years
AI will eventually surpass human intelligence, at every level; "it's obvious"
AI will destroy jobs, but plenty of others will be created: "look at YouTubers and influencers"...
My take: hmm, if AI starts destroying lots of office jobs, can we really all become influencers? In 2013, for instance, "47 of top 50 most common jobs in the US, accounting overall for roughly half of total workers in the US, had been around for 60 years or more." What we observe so far is that when a business model is disrupted, many of the jobs tied to it are destroyed; new jobs appear in the disruptive startups, but far fewer than were destroyed, and in the end the difference shows up only marginally in higher unemployment. Mostly it shows up as a transfer toward: i. other sectors not yet (or barely) disrupted, where productivity, in the sense of output per hour worked, is much lower than in the new disruptive companies; or ii. tech-related but hard, low-paid jobs, such as e-commerce warehouse workers, platform drivers and couriers, etc. (source)
We shouldn't be afraid of AI, or even of super AIs; they will amplify our own intelligence, handle tedious everyday chores better than we can, help us be more creative, let us converse in real time with other humans who don't speak our language, etc.
While AI must of course be regulated in application domains where lives are clearly at stake, such as self-driving cars and medical diagnostics, pausing AI research makes no sense (see the article below for a summary of the sector's positions on the topic). AI is certainly disruptive, but so were other technologies before it, like the printing press. The best way to counter the problems AI creates is precisely to keep doing AI research.
The invention he is proudest of: convolutional networks, which allow machines to "see." He developed them in the late 1980s; they were first used for character recognition, for example in check processing. Since then these systems have only been scaled up and generalized to other domains, thanks to training on ever-larger datasets and ever more massive, ever cheaper computing power (see the minimal code sketch after this list).
The latest Large Language Model released by Meta, LLaMa, is the work of 14 researchers, 10 of them French, based at the FAIR (Facebook AI Research) office in Paris that Yann LeCun opened
On that subject, The Economist wrote last March: Meta, in particular, is gaining a reputation for being less tight-lipped about its work than fellow tech giants. Its AI-software library, called PyTorch, has been available to anyone for a while; since February researchers can freely use its large language model, LLaMa, the details of whose training and biases are also public. All this, says Joelle Pineau, who heads Meta's open-research programme, helps it attract the brightest minds (who often make their move to the private sector conditional on a continued ability to share the fruits of their labours with the world).
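To make the idea concrete, here is a minimal sketch of such a convolutional network in Python with PyTorch (the Meta library mentioned above). The architecture and sizes are illustrative assumptions for 28x28 character images, not LeCun's original check-reading network:

```python
import torch
import torch.nn as nn

# A tiny convolutional network for 28x28 grayscale character images,
# in the spirit of the check-reading systems described above
# (illustrative architecture, not the original design).
class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5, padding=2),   # learn local visual filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 28x28 -> 14x14
            nn.Conv2d(8, 16, kernel_size=5, padding=2),  # combine filters into larger motifs
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(16 * 7 * 7, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Usage: classify a batch of 4 fake digit images.
model = TinyConvNet()
logits = model(torch.randn(4, 1, 28, 28))
print(logits.argmax(dim=1))  # predicted class for each image
```

The core idea is the one LeCun describes: small learned filters slide across the image to detect local patterns, and pooling makes detection robust to small shifts; scaling this same recipe up is what has carried computer vision from checks to today's systems.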
How Will AI Create or Destroy Jobs? It depends mostly on demand saturation; see the toy simulation after this list (source)
Automation arrives and increases productivity by eliminating some human tasks (=lower labor costs).
This reduces the cost of the product.
If there is competition, this increases quality and reduces prices.
Demand soars.
This increase in demand creates so much more work that companies need to hire people to do the tasks that are not yet automated.
But at some point, this improvement in quality and price saturates demand. People don’t want more products. They stop buying much more.
But productivity keeps improving. More and more tasks are automated, but the volume of sales doesn’t grow as much.
Employment drops.
This explains why lots of agricultural jobs disappeared—and the horses: People prioritize food. If you increase productivity, you’ll produce more food, and people will buy a bit more if they just couldn’t afford it before. But once their hunger is covered, they don’t want more quantity. They want more quality. This can only create so much more work. You won’t buy 30 avocados because they’re 30 times cheaper.
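To see why employment can rise and then fall under this dynamic, here is a toy Python simulation; every number in it (market size, price sensitivity, productivity growth) is an illustrative assumption, not data:

```python
# Toy model of the mechanism above: automation compounds productivity,
# competition passes the savings into prices, and demand eventually
# saturates, so employment first rises, then falls.

def demand(price: float, market_size: float = 100.0, ref_price: float = 20.0) -> float:
    """Units sold per period: very price-elastic while expensive,
    but capped at `market_size` once everyone is served."""
    return market_size / (1 + (price / ref_price) ** 2)

productivity = 1.0  # units produced per worker per period
for year in range(0, 30, 5):
    price = 100.0 / productivity           # competition passes savings into prices
    units = demand(price)
    workers = units / productivity         # labor needed to meet demand
    print(f"year {year:2d}: price={price:7.2f}  units={units:6.1f}  workers={workers:5.1f}")
    productivity *= 1.2 ** 5               # +20%/year automation gains, 5-year steps
```

Run it and the worker count climbs for about the first decade (cheaper products unlock latent demand), then shrinks once the market is saturated while productivity keeps compounding, which is exactly the agriculture story above.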
Petition to slow down AI work: where do the sector's players stand?
The petition in question, which asks: "Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?"
thus considering short-, medium-, and long-term risks.
And it demands: "we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4"
The 4 main positions around this petition, to simplify
1. Those who signed the petition, or who support the same positions without saying so openly, worried about short-, medium-, and long-term risks
Among the signatories, notably: Yoshua Bengio, one of the 3 godfathers of deep learning and the only one not to have joined the private sector, who recently wrote:
There is no guarantee that someone in the foreseeable future won’t develop dangerous autonomous AI systems with behaviors that deviate from human goals and values. The short and medium term risks – manipulation of public opinion for political purposes, especially through disinformation – are easy to predict, unlike the longer term risks – AI systems that are harmful despite the programmers’ objectives – and I think it is important to study both.
Among those who did not sign but have expressed similar fears:
Geoffrey Hinton, the eldest of the 3 godfathers of deep learning, at Google Brain. While he did not sign the petition, he nonetheless explained in a recent interview:
"Until quite recently, I thought it was going to be like 20 to 50 years before we have general purpose AI," Hinton said. "And now I think it may be 20 years or less."
As for the odds of AI trying to wipe out humanity? "It's not inconceivable, that's all I'll say," Hinton said.
The bigger issue, he said, is that people need to learn to manage a technology that could give a handful of companies or governments an incredible amount of power.
"I think it's very reasonable for people to be worrying about these issues now, even though it's not going to happen in the next year or two," Hinton said. "People should be thinking about those issues."
Demis Hassabis, CEO of DeepMind, the other leading AI lab, in Time magazine in Jan 2023: DeepMind’s CEO Helped Take AI Mainstream. Now He’s Urging Caution
2. Those who refused to sign the petition because it does not insist enough on the short-term risks (fairness, algorithmic bias, accountability, privacy, transparency, inequality, cost to the environment, disinformation/accuracy, cybercrime) and insists too much on long-term risks that belong to science fiction
notably the AI and language-science researchers Emily M. Bender (read her reaction), Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell
other short-term risks mentioned in this NYT article, In A.I. Race, Microsoft and Google Choose Speed Over Caution:
chatbots could hurt users who become emotionally attached to them,
people could believe chatbots, which often use an “I” and emojis, are human,
they could enable “tech-facilitated violence” through mass harassment online
For context: in 2020, Google blocked its top ethical A.I. researchers, Timnit Gebru and Margaret Mitchell, from publishing a paper warning that so-called large language models used in the new A.I. systems, which are trained to recognize patterns from vast amounts of data, could spew abusive or discriminatory language.
3. Those who refuse to sign because they believe that developing AI is also how we will find the solutions to its problems, and that the risks incurred are not yet high enough while the benefits are already immense
4. Finally, the most radical, like Eliezer Yudkowsky, who refuse to sign the petition because they think it does not go far enough and downplays the existential risk.
Eliezer is the pioneer of “AI alignment research”
Eliezer Yudkowsky: "Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.” (op-ed in Time magazine, Pausing AI Developments Isn't Enough. We Need to Shut it All Down; also listen to my podcast on the topic: Google, Apple, Podbean)
The New York Times notes: Mr. Yudkowsky and his writings played key roles in the creation of both OpenAI and DeepMind, another lab intent on building artificial general intelligence. He also helped spawn the vast online community of rationalists and effective altruists who are convinced that A.I. is an existential risk. This surprisingly influential group is represented by researchers inside many of the top A.I. labs, including OpenAI. They don’t see this as hypocrisy: Many of them believe that because they understand the dangers more clearly than anyone else, they are in the best position to build this technology.
On existential risk: In 2022, over 700 top academics and researchers behind the leading artificial intelligence companies were asked in a survey about future A.I. risk. Half of those surveyed stated that there was a 10 percent or greater chance of human extinction (or similarly permanent and severe disempowerment) from future A.I. systems.
We can also cite the OpenAI camp itself, rather ambivalent on the subject:
OpenAI's own leadership is cautious: its recent statement regarding artificial general intelligence says that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models."
Sam Altman, CEO of OpenAI:
in a recent interview with ABC News
Altman is “a bit scared”; he thinks society needs to be given time to adapt; they will adjust the technology as negative things occur; and we’ve got to learn as much as we can whilst the “stakes are still low”
"AI will destroy millions of jobs, but humanity will create better ones"
in a 2019 NYT interview: while he told the newspaper he thought AGI could bring a huge amount of wealth to the people, he also admitted that it may end up ushering in the apocalypse.
in a more recent NYT interview: "The hype over these systems — even if everything we hope for is right long term — is totally out of control for the short term," he said, adding that we have enough time to get ahead of these problems.
His grand idea is that OpenAI will capture much of the world’s wealth through the creation of A.G.I. and then redistribute this wealth to the people through a universal basic income. (...) he tossed out several figures — $100 billion, $1 trillion, $100 trillion.
Sam Altman forecasts that within a few years, there will be a wide range of different AI models propagating and leapfrogging each other all around the world, each with its own smarts and capabilities, and each trained to fit a different moral code and viewpoint by companies racing to get product out of the door. "The only way I know how to solve a problem like this is iterating our way through it, learning early and limiting the number of 'one-shot-to-get-it-right scenarios' that we have," said Altman.
Sam Altman, the C.E.O. of OpenAI, recently told ABC News, “I’m particularly worried that these models could be used for large-scale disinformation.” And, he noted, “Now that they’re getting better at writing computer code, [they] could be used for offensive cyberattacks.” He added that “there will be other people who don’t put some of the safety limits that we put on,” and that society “has a limited amount of time to figure out how to react to that, how to regulate that, how to handle it.”
And finally: "Nobody is launching runs bigger than GPT-4 for 6-9 months anyway. Why? Because it needs new hardware (H100/TPU-v5 clusters) anyway to get scale above that, which are… 6-9 months away after being installed, burned in, optimised etc," says Emad Mostaque, CEO at Stability AI, the open-source generative AI company behind Stable Diffusion, a famous text-to-image AI model
The latest newsletters:
The bill?
This newsletter is free; if you'd like to encourage me to keep up this modest work of curation and synthesis, you can take a few seconds to:
forward this email to a friend or share it via WhatsApp
star this email in your inbox
click the heart at the bottom of the email
A big thank-you in advance! 🙏
Click here to subscribe and receive the next emails if this one was forwarded to you.
A few words about the chef
I've written more than 50 articles in recent years, available here, many of them published in outlets such as the Journal du Net (my columns here), the Huffington Post, L'Express, and Les Échos.
I'm CEO and co-founder of the digital agency KRDS; we have offices in 6 countries across France and Asia. I'm based in Singapore (my LinkedIn, my Twitter), and I'm also a member of the NXU think tank.
Thank you, and have a good weekend!
Thomas