World's fastest shoes, a machine that reads minds at a distance, wear this sweater and escape the AI, image-to-music & more
Amazing text-to-video, prompt battles, AI creates "drone" shots from your phone footage & more
Hello,
You are receiving the Parlons Futur newsletter: at most once a week, a selection of news items summarized in bullet points on tech, science, economics, geopolitics and defense, to better grasp the future.
My name is Thomas; more about me at the bottom of this email.
So here is my latest selection!
The aperitif
New fun AI tool: upload a source image, suggest a word, and the tool will update the image under the influence of that word
share a pic of a slice of watermelon, enter "lamp", and it will generate a lamp with a watermelon feel
Many more examples here
This sweater developed by the University of Maryland is an invisibility cloak against AI. It uses "adversarial patterns" to stop AI from recognising the person wearing it. See the AI getting confused in that 1-min video
After Rap Battles, we now have "Prompt Battle" nights: "It's like a rap battle but with keyboards and DALL-E access
Same fierce competition and vibrant energy as a real battle" (as a reminder, DALL-E is a tool that uses AI to generate an original image from a few words of text)
New AI tools to create original avatar pics are all the rage:
profilepicture.ai: upload at least 10 photos, their AI trains for up to 3 hours, and for $34 you get more than a hundred new profile pictures (works for humans, cats and dogs; full HD quality, 2048×2048)
photoai.me: their AI will create photos of you based on the pack you choose. Just upload ~10 photos of yourself (after you pay), their AI will train on those, and they'll deliver 10 new photos of you within 12-24 hrs. Several packs at $25 each: LinkedIn pack, Tinder pack, Celeb pack
Upload a few pics to this tool, avatarai.me, and for $40 get 100 avatar pics in many different styles
Rewind: "the search engine for your life", a macOS app that enables you to find anything you've seen, said, or heard. This tech is only possible now thanks to the latest Apple chip (the 2-min video demo)
And yet another promising AI tool: it creates key insights of a podcast episode with short AI-generated audio summaries, and lets you deep dive into the parts of the episode you find most interesting.
In the same vein: assemblyai.com "automatically summarizes audio and video files at scale with AI": for example, a 30-minute audio file is summarized in writing as, your choice, 5 bullet points, a paragraph, a headline or 3 words
I haven't tested it, and I assume they are not lying outright; but even if it is still imperfect, just imagine where we will be in 5 years. What a crazy time!
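To make the "same audio, several summary lengths" idea concrete, here is a deliberately toy extractive summarizer in Python. It is not AssemblyAI's actual method (their product uses neural models on real transcripts); it just illustrates the pattern of scoring sentences and keeping the top N, where N is the granularity you pick.

```python
import re
from collections import Counter

def summarize(transcript: str, max_sentences: int) -> list[str]:
    """Score each sentence by the average frequency of its words in
    the whole transcript, then keep the top `max_sentences` in their
    original order. A crude stand-in for a neural summarizer."""
    sentences = re.split(r"(?<=[.!?])\s+", transcript.strip())
    freq = Counter(re.findall(r"[a-z']+", transcript.lower()))

    def score(sentence: str) -> float:
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    ranked = sorted(sentences, key=score, reverse=True)[:max_sentences]
    return [s for s in sentences if s in ranked]  # restore original order

transcript = (
    "The rocket launched at dawn. Engineers cheered as the rocket "
    "cleared the tower. Weather had threatened the launch all week. "
    "The rocket reached orbit twenty minutes later."
)
print(summarize(transcript, 3))  # "bullet points" granularity
print(summarize(transcript, 1))  # "headline" granularity
```

The real systems go much further (abstractive rewriting, not sentence selection), but the user-facing contract is the same: one transcript in, a summary of the requested size out.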
It seems inevitable: soon there will be tools to translate a text or a speech in real time, into the language of your choice, in the style of your choice (formal, casual, in prose, in verse, in alexandrines why not, etc.), with the accent of your choice, summarized or not, and if summarized, with the degree of concision of your choice, with the voice of the person of your choice, delivered by a photorealistic, three-dimensional representation of the person of your choice, etc.
I wrote in this op-ed, "Athens 2.0, or what place for humans in a world of machines?", published in the Journal du Net in 2013: "We will thus be able to make a lifelike 3D Louis de Funès say whatever we want, delivered just as he might have done in his lifetime."
This tool can generate music from an image (one example). In a similar vein, someone on Twitter suggested: "Put in the book, get a soundtrack that plays while you read it, inspired by the content."
Example of using the text-generating AI GPT-3 in a Google spreadsheet: the AI takes a guest's name in column 1 and an anecdote that absolutely must be mentioned in column 2, and generates the text of an original "thank you card" in column 3 (source, with more examples)
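Under the hood, that spreadsheet trick boils down to building one prompt per row and sending it to the model. Here is a minimal sketch of the pattern; the `complete` callable stands in for the actual OpenAI API call (hypothetical here, so the sketch runs without an API key).

```python
def build_prompt(guest_name: str, anecdote: str) -> str:
    """Turn two spreadsheet cells into a prompt for the model."""
    return (
        "Write a short, warm, original thank-you card.\n"
        f"Guest: {guest_name}\n"
        f"Be sure to mention this anecdote: {anecdote}\n"
        "Card:"
    )

def thank_you_card(guest_name: str, anecdote: str, complete) -> str:
    """`complete` is any text-completion function, e.g. a thin
    wrapper around the OpenAI API (not implemented here)."""
    return complete(build_prompt(guest_name, anecdote))

# Stub completion function so the sketch is self-contained:
card = thank_you_card(
    "Marie", "she brought a homemade lemon tart",
    complete=lambda prompt: f"[model output for: {prompt[:30]}...]",
)
print(card)
```

A spreadsheet add-on does exactly this for every row, which is why the same formula can churn out hundreds of personalized cards.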
This other tool turns videos shot from the ground with your smartphone into a credible aerial video of the scene, as if captured by a drone: "Now you can create 'drone' shots from your phone footage" (the 17-second demo)
The analyst Benedict Evans on the metaverse:
Making a device better does not necessarily make it universal. Most obviously, we've been applying Moore's Law to games consoles for 40 years or so, and they've got a lot better, but most people don't care.
A PlayStation 5 is objectively amazing, but the global installed base of games consoles is flat at only about 175m units, and it should now be clear that adding even better graphics (another decade of Moore's Law) isn't going to change that. Most people simply aren't interested in that kind of experience, no matter how much Moore's Law Sony and Microsoft throw at them.
VR demos of industrial designers or heart surgeons looking at 3D models are cool, but most people's work isn't in 3D either.
We can't know in advance whether the metaverse will gain mass adoption. A lot of very clever people did not realise that mobile would replace PCs as the centre of tech, so check back in a decade to find out. But the test is that for VR and AR to matter, we need to do things where 3D matters, whereas mobile did not have to create mobile things.
I'd say that, at a minimum, for communication it will become irresistible (my summary from 2 weeks ago, following the Meta/Facebook announcements)
Google's text-to-video AI is amazing: see the video it generated for "A happy elephant wearing a birthday hat walking under the sea"
When the AI goes off the rails: "Salmon in a river" (source)
From the goodness of your heart ❤️
If you enjoy this free digest, please take 3 seconds to forward it to even just one contact
And if this email was forwarded to you, you can click here to subscribe so you don't miss the next ones
Time to eat!
The Economist's headline today is "Goodbye 1.5°C": Global warming cannot be limited to 1.5°C. The world is missing its lofty climate targets; time for some realism.
The world is already about 1.2°C hotter than it was in pre-industrial times. Given the lasting impact of greenhouse gases already emitted, and the impossibility of stopping emissions overnight, there is no way Earth can now avoid a temperature rise of more than 1.5°C.
There is still hope that the overshoot may not be too big, and may be only temporary, but even these consoling possibilities are becoming ever less likely.
Overshooting 1.5°C does not doom the planet. But it is a death sentence for some people, ways of life, ecosystems, even countries.
If the rich world allows global warming to ravage already fragile countries, it will inevitably end up paying a price in food shortages and proliferating refugees.
The world needs to be more pragmatic, and face up to some hard truths:
1: cutting emissions will require much more money. Global investment in clean energy needs to triple from today's $1trn a year, and be concentrated in developing countries, which generate most of today's emissions.
2: fossil fuels will not be abandoned overnight
3: since 1.5°C will be missed, greater efforts must be made to adapt to climate change.
Fortunately a lot of adaptation is affordable. It can be as simple as providing farmers with hardier strains of crops and getting cyclone warnings to people in harm's way.
This is an area where even modest help from rich countries can have a big impact. Yet they are not coughing up the money they have promised to help the poorest ones adapt.
That is unfair: why should poor farmers in Africa, who have done almost nothing to make the climate change, be abandoned to suffer as it does?
Finally, having admitted that the planet will grow dangerously hot, policymakers need to consider more radical ways to cool it. Technologies to suck carbon dioxide out of the atmosphere, now in their infancy, need a lot of attention. So does "solar geoengineering", which blocks out incoming sunlight. Both are mistrusted by climate activists, the first as a false promise, the second as a scary threat. On solar geoengineering, people are right to worry. It could well be dangerous and would be very hard to govern. But so will an ever hotter world.
Tesla completely at odds with the rest of the industry on autonomous driving
Setting aside the fact that their technology is far from functional, another salient point: Tesla has long argued that mastering autonomous driving requires nothing more than digesting video feeds (after all, we ourselves use only our eyes... well, and our ears a bit), with no need for LIDAR (laser imaging, detection, and ranging) on top, a sort of radar that uses lasers instead of radio waves to detect obstacles. All of its competitors use it and are very critical of Tesla on this point.
Andrej Karpathy, Tesla's former director of AI (he left recently and is still highly respected by Elon Musk), doubles down in a podcast: "Still no need for LIDAR. I will make a prediction: the other companies will drop it as well"
Tesla had already recently removed the radars and ultrasonic sensors from its new models; more than ever, Tesla is betting everything on video feeds alone.
And where many competitors are also trying, at great expense, to map every road in the world to the centimeter, Andrej Karpathy drives the point home: "Tesla will not premap environments to one centimeter accuracy, Tesla just uses Google Maps' level of accuracy. Doing so would be a crutch, a distraction, it costs entropy and bloat, it would be a massive dependency, humans don't need it"
Hard to say who is right; and truth be told, we still seem very far from autonomous driving. See the next story...
We're now deep in the autonomy winter.
As tech analyst Benedict Evans explains:
The first wave of machine learning, from 2013 onwards, made a lot of people think that this could actually work, and it certainly got us 90% of the way there; but it now seems fairly clear that having a car with no steering wheel that can drive across the country might be generations away, and it will certainly take longer and cost far more than people hoped.
à noter : Argo, the Ford/Volkswagen autonomy Joint Venture with a $1 billion investment is shutting down (Techcrunch, engadget).
Ford said that it made a strategic decision to shift its resources to developing advanced driver assistance systems, and not autonomous vehicle technology that can be applied to robotaxis.
"Profitable, fully autonomous vehicles at scale are a long way off and we wonât necessarily have to create that technology ourselves,â"
Ford CEO Jim Farley . "It's estimated that more than a hundred billion has been invested in the promise of level four autonomy," he said during the call, "And yet no one has defined a profitable business model at scale."
In short, Ford is refocusing its investments away from the longer-term goal of Level 4 autonomy (a vehicle capable of navigating without human intervention, though manual control is still an option) toward more immediate short-term gains in L2+ and L3 autonomy.
L2+ is today's state of the art: think Ford's BlueCruise or GM's Super Cruise, with hands-free driving along pre-mapped highway routes.
L3 is where the vehicle handles all safety-critical functions along those routes, not just steering and lane-keeping.
"Commercialization of L4 autonomy, at scale, is going to take much longer than we previously expected," Doug Field, chief advanced product development and technology officer at Ford, said during the call. "L2+ and L3 driver assist technologies have a larger addressable customer base, which will allow them to scale more quickly and profitably."
The World's Fastest Shoes Promise to Increase Your Walking Speed up to 11km/h (source)
Developed by a team of robotics engineers who spun off their work at Carnegie Mellon University into a new company called Shift Robotics
You don't need to know how to roller skate with the Moonwalkers, you just walk.
A strap-on design allows the Moonwalkers to be used with almost any pair of shoes, and each unit features an electric motor that powers a set of wheels similar to what you'd find on a pair of inline roller skates, but much smaller, and not all in a single line, so there's no balancing required.
Sensors monitor the user's walking gait while algorithms automatically adjust the power of the motors to match, synchronized between each foot, so the added speed increases and decreases as the user walks faster or slower.
battery-powered range of about 10 km
because theyâre much smaller than an electric scooter or a bike, theyâre easy to keep stashed at your desk or even in a backpack when not in use.
Full retail pricing for a pair of Moonwalkers is expected to be around $1,400.
Link to the Kickstarter campaign (they have raised almost 3X what they asked for)
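The gait-matching described above is essentially a feedback loop: estimate the walker's stride speed from the sensors, then steer the wheel motors toward that target. Shift Robotics hasn't published its controller, so the following is a deliberately simplified proportional-control sketch, with made-up gain and units, capped at the advertised 11 km/h.

```python
def motor_command(current_wheel_speed: float,
                  estimated_gait_speed: float,
                  gain: float = 0.5,
                  max_speed_kmh: float = 11.0) -> float:
    """One step of a proportional controller: nudge the wheel speed
    toward the speed the sensors say the user is walking at,
    never exceeding the shoes' 11 km/h top speed."""
    error = estimated_gait_speed - current_wheel_speed
    new_speed = current_wheel_speed + gain * error
    return max(0.0, min(new_speed, max_speed_kmh))

# User speeds up from a standstill to a 6 km/h walk:
speed = 0.0
for _ in range(10):
    speed = motor_command(speed, estimated_gait_speed=6.0)
print(round(speed, 2))  # converges toward 6.0
```

A real controller would also fuse gait-phase detection and synchronize the two feet, but the core idea is the same: the wheels follow the walker, not the other way around.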
A fine career change: French ex-Arsenal player Mathieu Flamini has a plan to decarbonize the chemical industry (Wired)
He cofounded and is now CEO of GFBiochemicals, whose main product is an obscure molecule called levulinic acid. The company spent a decade figuring out how to mass-produce it from agricultural waste, and it offers a "plant-based" alternative to oil-derived chemicals that could be used in thousands of products, from paints to cosmetics.
Levulinic acid is a building block: a platform that can be tweaked and altered to suit the requirements of different industries. GFBiochemicals already has almost 200 patents for plant-based solvents, polyols, and plasticizers, all things that could replace substances extracted from fossil fuels, which have toxic or nonbiodegradable byproducts.
In July, the company opened a new factory near Flamini's hometown of Marseille, in an old oil-refining area, and it's working on deals with some large multinational chemical companies.
"We're allowing the replacement of those obsolete molecules, which are having a negative impact on the planet, with new molecules that reduce CO2 emissions and are biodegradable and nontoxic."
"We want to be the Intel of the chemical world," Flamini says. "We have a platform technology that allows us to go across industries, from personal care to home care to agriculture to paints and coatings."
I find this hard to believe, it seems crazy: researchers manage to read thoughts without any invasive method (source)
From the scientific paper itself:
Given novel brain recordings, this decoder generates intelligible word sequences that recover the meaning of perceived speech, imagined speech, and even silent videos, demonstrating that a single language decoder can be applied to a range of semantic tasks.
Past mind-reading techniques relied on implanting electrodes deep in people's brains. The new method instead relies on a noninvasive brain-scanning technique called functional magnetic resonance imaging (fMRI) to reconstruct language.
This algorithm, designed by a team at the University of Texas, can "read" the words that a person is hearing or thinking during an fMRI brain scan.
"If you had asked any cognitive neuroscientist in the world twenty years ago if this was doable, they would have laughed you out of the room"
Using fMRI data for this type of research is difficult because it is rather slow compared to the speed of human thought.
Instead of detecting the firing of neurons, which happens on the scale of milliseconds, MRI machines measure changes in blood flow within the brain as proxies for brain activity; such changes take seconds.
The reason the setup in this research works is that the system is not decoding language word-for-word, but rather discerning the higher-level meaning of a sentence or thought.
The algorithm was trained on fMRI brain recordings taken as three study subjects (one woman and two men, all in their 20s or 30s) listened to 16 hours of podcasts and radio stories.
However, it does have some shortcomings; for example, it isn't very good at conserving pronouns and often mixes up first and third person. The decoder, says Huth, "knows what's happening pretty accurately, but not who is doing the things."
if that's the only problem, it already seems incredible!
Notable from a privacy point of view: a decoder trained on one individual's brain scans could not reconstruct language from another individual. So someone would need to participate in extensive training sessions before their thoughts could be accurately decoded.
Sam Nastase, a researcher and lecturer at the Princeton Neuroscience Institute who was not involved in the research, says using fMRI recordings for this type of brain decoding is "mind-blowing," since such data are typically so slow and noisy.
Since the decoder uses noninvasive fMRI brain recordings, it has higher potential for real-world application than invasive methods do, though the expense and inconvenience of using MRI machines is an obvious challenge.
Wow: the results reveal which parts of the brain are responsible for creating meaning. By using the decoder on recordings of specific areas, such as the prefrontal cortex or the parietal-temporal cortex, the team could determine which part was representing what semantic information.
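The "decode meaning, not exact words" principle can be illustrated with a toy model: represent each candidate sentence as a bag-of-words vector and pick the candidate closest to the observed signal. This is only an illustration of the matching idea; the actual decoder uses a neural language model and real blood-oxygen signals, neither of which appears here.

```python
import math
from collections import Counter

def vectorize(sentence: str) -> Counter:
    """Crude semantic vector: a bag of lowercase words."""
    return Counter(sentence.lower().split())

def similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bags of words."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def decode(signal: Counter, candidates: list[str]) -> str:
    """Return the candidate whose vector best matches the signal:
    we recover the gist even if the exact words were lost."""
    return max(candidates, key=lambda c: similarity(signal, vectorize(c)))

# A "signal" that kept the gist but not the exact wording:
signal = vectorize("dog chased the ball in the park")
candidates = [
    "a dog ran after a ball in the park",
    "the stock market fell sharply today",
]
print(decode(signal, candidates))
```

Even this crude matcher prefers the paraphrase over the unrelated sentence, which is the same reason a slow, blurry fMRI signal is still enough to rank whole meanings.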
The latest newsletters:
The check?
This newsletter is free; if you'd like to encourage me to keep up this modest work of curation and synthesis, you can take a few seconds to:
forward this email to a friend
star this email in your inbox
click on the heart at the bottom of the email
A big thank-you in advance!
Subscribe here to receive the next emails if this one was forwarded to you.
A few words about the chef
I've written more than 50 articles in recent years, which you can find here, including a good number published in outlets such as the Journal du Net (my columns here), the Huffington Post, L'Express and Les Échos.
You can find my podcast Parlons Futur here (or search for "Parlons Futur" in your favorite podcast app); among other things, you'll find interviews and book summaries (I notably got to interview Jacques Attali).
I'm the CEO and co-founder of the digital agency KRDS; we have offices in 6 countries across France and Asia. I'm based in Singapore (my LinkedIn, my Twitter), and also a member of the think tank NXU.
Thanks, and have a good weekend!
Thomas






