This summer, furniture company Kartell will start selling a new plastic chair designed by Philippe Starck – with some help.
The system used – not, perhaps, strictly an AI – was a generative design software platform from Autodesk. Supplied with initial design goals, along with parameters such as materials, manufacturing methods and cost constraints, the software explores the possible permutations of a solution to generate design alternatives. With each iteration, it tests what works and what doesn’t, and learns from the results.
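The generate-and-test loop described above can be sketched in heavily simplified form. This is a toy illustration only – the scoring function, mutation step and weight constraint are invented for the example, not Autodesk’s actual method:

```python
import random

def generative_search(score, mutate, seed, iterations=200):
    """Propose variations on the current best design, score each
    candidate, and keep any improvement to seed the next iteration."""
    best = seed
    for _ in range(iterations):
        candidate = mutate(best)
        if score(candidate) > score(best):
            best = candidate  # 'learn' from what worked
    return best

# Hypothetical objective: maximise a comfort proxy while keeping the
# design under a hard weight constraint (both numbers are invented).
def score(design):
    comfort, weight = design
    return comfort if weight <= 4.0 else -1.0

def mutate(design):
    comfort, weight = design
    return (comfort + random.uniform(-0.1, 0.3),
            weight + random.uniform(-0.2, 0.2))

random.seed(1)
best = generative_search(score, mutate, seed=(1.0, 3.5))
```

Real generative design systems search far richer spaces – geometry, material, manufacturing method – but the test-and-keep structure is the same.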
“As the relationship between the two matured, the system became a much stronger collaborative partner, and began to anticipate Starck’s preferences and the way he likes to work,” says Mark Davis, senior director of design futures at Autodesk.
The final result, a sleek and streamlined dining chair with a comfortable seat, has itself been named ‘AI’.
In the later stages of the design process, there was significant human involvement – not least because the Autodesk software had difficulty making the chairs stackable. Nor were all its designs particularly beautiful.
However, says Mr Starck, “AI is the first chair designed outside of our brain, outside of our habits of thought.”
The Washington Post has a broad remit and a well-staffed newsroom, but naturally lacked the resources to cover every single high school football game in the area. That is, until it put its AI reporter Heliograf on the case.
Based on data supplied by high school football coaches, the system identifies what’s important, matches it to a template, and then publishes short reports across several platforms.
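That match-to-template step can be sketched in a few lines. The field names, thresholds and templates here are invented for illustration – this is not Heliograf’s actual code:

```python
def generate_report(game):
    """Pick the most newsworthy angle from structured game data and
    render it through a matching sentence template."""
    margin = abs(game["home_score"] - game["away_score"])
    winner, loser = (
        (game["home"], game["away"])
        if game["home_score"] > game["away_score"]
        else (game["away"], game["home"])
    )
    high, low = sorted((game["home_score"], game["away_score"]), reverse=True)
    # Rule-based 'what's important': a blowout gets a different template
    # from a close game.
    if margin >= 21:
        template = "{w} routed {l} {hi}-{lo} on Friday night."
    elif margin <= 3:
        template = "{w} edged {l} in a {hi}-{lo} thriller."
    else:
        template = "{w} beat {l} {hi}-{lo}."
    return template.format(w=winner, l=loser, hi=high, lo=low)

print(generate_report(
    {"home": "Wilson", "away": "Einstein", "home_score": 35, "away_score": 7}
))
# A 28-point margin selects the 'routed' template.
```

The real system adds steps this sketch omits – pulling data automatically, flagging anomalies for human editors, publishing to multiple platforms – but template selection over structured data is the heart of it.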
Similarly, the Associated Press has used robots to automate some of its earnings coverage, and says that AI has freed up 20 per cent of reporters’ time while also cutting the error rate. Bloomberg does the same, with around a third of its coverage produced with the help of automation.
And the Los Angeles Times uses similar technology to publish earthquake alerts, sometimes within minutes of the shaking starting.
So far, AI isn’t producing longer-form articles on its own, but it is already helping journalists to do so. Forbes, for example, has a content management system called Bertie that suggests real-time trending topics to cover, appropriate imagery and even compelling headlines.
Last year, an ad for the luxury car brand Lexus boosted European sales by 35 per cent more than expected.
The ad showed a craftsman finishing his work on the ES sedan before watching it go out into the world. The car is about to crash dramatically when the automatic emergency braking system cuts in, saving it from destruction.
The ad was directed by Kevin Macdonald, director of The Last King of Scotland and Whitney, but was written by an AI developed by creative agency The&Partnership London and marketing technology firm Visual Voice, and based on IBM Watson.
Watson was fed data on 15 years’ worth of successful car ads, as well as those for other luxury brands, along with data on human emotional intelligence and intuition. It opted for limited dialogue and certain visually appealing scenes: for example, a winding road with trees on one side and water on the other.
“I thought I’d be writing an ad with the assistance of AI. Instead it took over and wrote the whole script,” says Dave Bedwood, creative partner at The&Partnership.
He does, though, describe the story as “charmingly simplistic” – it seems there is still room for human beings in the process.
Musicians have always experimented with technology, and composition is no exception; several companies have created AI-based systems that can write short pieces of music.
OpenAI’s MuseNet, for example, can generate four-minute musical compositions with ten different instruments, and can combine different musical styles from country to Mozart to the Beatles.
Meanwhile, IBM’s Watson Beat has been given the basic principles of musical theory, and can create short pieces of music when provided with a few seconds of melody and instructions on mood, genre and tempo.
Writing longer pieces of music is a taller order for an AI, thanks to the sheer complexity involved. Multiple motifs and phrases, and repetition and rhythm based on relative distances and recurring intervals rather than absolute timing, all present problems.
Google’s Music Transformer, part of its Project Magenta, aims to overcome these problems through ‘relative attention’, which focuses on relational features. Given Chopin’s ‘Black Key Etude’ as a starting point, it was able to produce a piece that retained many of the original’s motifs and remained consistent in style throughout.
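The core idea of relative attention can be sketched in a few lines of NumPy: alongside the usual content-based comparison between positions, each attention score gets a term looked up by the distance between positions, so a pattern learned at one point in a piece transfers to any other. This is a bare-bones, single-head sketch of the mechanism, not Magenta’s implementation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def relative_attention(q, k, v, rel_emb):
    """Attention where scores depend on both content (q against k) and
    a learned embedding of the relative offset i - j between positions.

    rel_emb has shape (2n - 1, d): one row per offset in [-(n-1), n-1].
    """
    n, d = q.shape
    offsets = np.arange(n)[:, None] - np.arange(n)[None, :]  # i - j
    # Score each query against the embedding of its offset to each key.
    rel_scores = (q[:, None, :] * rel_emb[offsets + n - 1]).sum(-1)
    scores = (q @ k.T + rel_scores) / np.sqrt(d)
    return softmax(scores) @ v
```

With `rel_emb` set to zeros this reduces to ordinary scaled dot-product attention; trained offset embeddings let the model favour, say, whatever happened a fixed number of steps earlier – exactly the kind of regularity musical repetition has.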
Developing new fragrances is about more than creating the next Chanel No 5; they’re used in everything from deodorant to washing powder and air fresheners. And while the high retail price of a designer perfume makes expensive ingredients cost-effective, this isn’t the case when it comes to fabric conditioner.
As a result, fragrance producer Symrise teamed up with IBM to create Philyra, a machine-learning system that sifts through hundreds of thousands of formulas and thousands of raw materials. It can access fragrance formulas, data about fragrance families – fruity, oriental or flowery – and historical data, helping identify patterns and new combinations of ingredients.
“Philyra’s understanding of consumer preferences and knowledge of formulas and ingredients led to new fragrance combinations, which allowed our perfumers to accelerate the creative design process and focus on perfecting the final products,” says Alexandre Bouza, marketing director of Brazilian cosmetics manufacturer O Boticario, one of Symrise’s customers.
The resulting two fragrances, including one designed specifically for Brazilian millennials, are set to come to market this year. Symrise also plans to introduce Philyra into its Perfumery School to help train the next generation of perfumers.