Archive — Artificial intelligence

Microsoft’s robot editor confuses mixed-race Little Mix singers (Jim Waterson, The Guardian)

Jade Thirlwall and Leigh-Anne Pinnock from Little Mix

How about this for dystopia? MSN have replaced human news editors with a robot powered by Microsoft artificial intelligence technology. The problem is, it has already begun making racist decisions.

And then, in case you thought the story wasn’t already absurd enough, this:

In advance of the publication of this article, staff at MSN were told to expect a negative article in the Guardian about alleged racist bias in the artificial intelligence software that will soon take their jobs.

Because they are unable to stop the new robot editor selecting stories from external news sites such as the Guardian, the remaining human staff have been told to stay alert and delete a version of this article if the robot decides it is of interest and automatically publishes it on MSN.com. They have also been warned that even if they delete it, the robot editor may overrule them and attempt to publish it again.

Then the article ends on a delicious snippet — that Microsoft itself is concerned about the reputational damage this scheme will cause to its AI technology.

I’m immediately reminded of Microsoft’s disastrous Tay experiment.

The six main stories, as identified by a computer

We have all heard the idea that there are only a handful of basic stories. Now we can feed thousands of stories into computers and see the six story arcs that emerge, extrapolating an idea first expressed by Kurt Vonnegut.

This may not seem like anything special, Vonnegut says—his actual words are, “it certainly looks like trash”—until he notices another well known story that shares this shape. “Those steps at the beginning look like the creation myth of virtually every society on earth. And then I saw that the stroke of midnight looked exactly like the unique creation myth in the Old Testament.” Cinderella’s curfew was, if you look at it on Vonnegut’s chart, a mirror-image downfall to Adam and Eve’s ejection from the Garden of Eden. “And then I saw the rise to bliss at the end was identical with the expectation of redemption as expressed in primitive Christianity. The tales were identical.”
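The "feed stories into computers" step is, at heart, sentiment analysis over narrative time: slide equal-sized windows across the text, score each window against a valence lexicon, and the resulting curve is the story's emotional arc, the shape Vonnegut drew by hand. A minimal sketch, with a tiny made-up lexicon (real analyses score words against lexicons with thousands of entries):

```python
# Toy emotional-arc extractor. VALENCE is a tiny, made-up sample lexicon;
# real studies of story shapes use lexicons with thousands of scored words.
VALENCE = {"happy": 3, "love": 3, "bliss": 3, "rise": 1,
           "midnight": -1, "lost": -2, "cruel": -3, "wept": -3}

def emotional_arc(words, windows=10):
    """Split the story into equal windows; average word valence in each."""
    size = max(1, len(words) // windows)
    arc = []
    for i in range(0, size * windows, size):
        window = words[i:i + size]
        arc.append(sum(VALENCE.get(w, 0) for w in window) / len(window))
    return arc

# A ten-word "Cinderella": misery, a midnight setback, then a rise to bliss.
story = "cruel stepmother wept lost hope midnight rise love happy bliss".split()
print(emotional_arc(story, windows=5))  # rises from negative to positive
```

Plot the returned list against window position and you get exactly the kind of chart Vonnegut sketched: a fall, a stroke-of-midnight dip, and a rise to bliss at the end.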

What happens when you let computers optimise floorplans

Evolving floorplans

The rooms and expected flow of people are given to a genetic algorithm which attempts to optimize the layout to minimize walking time, the use of hallways, etc. The creative goal is to approach floor plan design solely from the perspective of optimization and without regard for convention, constructability, etc.

I’m not sure this would work in real life. But it’s a fascinating idea, and the floorplans are certainly interesting to look at.

Via Boing Boing.
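The quoted description maps onto a standard genetic-algorithm loop: encode a candidate floorplan, score it by traffic-weighted walking distance, keep the best candidates, and mutate them. Here is a minimal sketch, assuming a toy one-dimensional layout (rooms along a corridor) and a made-up traffic table; the real project evolves two-dimensional plans, but the optimisation loop has this shape:

```python
import random

# Hypothetical toy inputs: six rooms and how often people walk between pairs.
ROOMS = ["entrance", "office", "kitchen", "meeting", "storage", "toilet"]
FLOW = {("entrance", "office"): 10, ("office", "meeting"): 8,
        ("office", "kitchen"): 5, ("kitchen", "toilet"): 2,
        ("meeting", "entrance"): 4, ("storage", "office"): 1}

def walking_cost(layout):
    """Total traffic-weighted walking distance for a given room ordering."""
    pos = {room: i for i, room in enumerate(layout)}
    return sum(trips * abs(pos[a] - pos[b]) for (a, b), trips in FLOW.items())

def mutate(layout):
    """Swap two rooms: the simplest mutation for a permutation genome."""
    a, b = random.sample(range(len(layout)), 2)
    child = list(layout)
    child[a], child[b] = child[b], child[a]
    return child

def evolve(generations=300, population=30, seed=0):
    random.seed(seed)
    pop = [random.sample(ROOMS, len(ROOMS)) for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=walking_cost)               # fittest (cheapest) first
        survivors = pop[: population // 2]        # truncation selection
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(population - len(survivors))]
    return min(pop, key=walking_cost)

best = evolve()
print(best, walking_cost(best))
```

Truncation selection plus a single swap mutation are the simplest possible operators; the project described above also folds hallway use and other terms into the fitness function, which is exactly where the "without regard for convention" layouts come from.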

Artificial intelligence for more human interfaces

A well-balanced assessment of the benefits of artificial intelligence, and of its dangers. It’s lengthy but well worth your time: lots of great examples of how artificial intelligence can be a force for good, tempered with plenty of warnings against using it badly.

Nowadays, we expect any photo search to be able to understand “dog” and find photos of dogs… And this is where Deep Learning worked its magic.

The problem is that only a few interfaces of well-known, big companies give this convenience. And that makes people wonder who owns information and where they know all these things from.

Unless we democratise this convenience and build interfaces everywhere that are that clever, we have a problem. Users will keep giving only a few players their information and in comparison less greedy systems will fall behind.

The other big worry I have is that this convenience is sold as “magic” and “under the hood” and not explained. There is a serious lack of transparency about what was needed to get there.

Google Duplex is not creepy

Further to my point yesterday about why I don’t agree that Google’s new AI-powered phone calling technology is creepy.

…we live in a world where most restaurants and shops can only really be dealt with by phone – which is very convenient and nice, but (to varying degrees) it doesn’t work for deaf people, introverts, anyone with a speech impediment or social anxiety, or people from Glasgow. Those people have every right to a nice dinner and this makes it possible – or at least much easier.

Note — 2018-05-12

Lots of people think Google’s new AI-powered phone calls are creepy. I don’t quite follow this. Big companies have been making normal people speak to robots for decades. This isn’t a new concept. The difference is that this gives ordinary people the opportunity to do to big companies what big companies have been doing to them all along.

AI don’t kill people, people do

Reflections on whether technological advances will ‘take our jobs’.

…[I]n Western societies, technical advancement has allowed many of us to extricate ourselves from physical, dangerous and demeaning forms of work, and to create careers that are fulfilling beyond remuneration: creatively, intellectually, socially… “job satisfaction”.

Historically, technological advances haven’t meant humans losing jobs; they have meant we take on increasingly complex and interesting work. Perhaps the future will bring us further job satisfaction.

That’s not a bad place to be at all. A reminder that we should be grateful for the luxury we have in being able to pursue a good career in the first place, rather than slaving away to make ends meet.

See also: Why you shouldn’t follow your passion

Crash: how computers are setting us up for disaster

The headline is slightly over-the-top. But this is nevertheless a fascinating long read on the paradox of automation — how our reliance on computers leaves us incompetent to act when we are needed the most.

First, automatic systems accommodate incompetence by being easy to operate and by automatically correcting mistakes. Because of this, an inexpert operator can function for a long time before his lack of skill becomes apparent – his incompetence is a hidden weakness that can persist almost indefinitely. Second, even if operators are expert, automatic systems erode their skills by removing the need for practice. Third, automatic systems tend to fail either in unusual situations or in ways that produce unusual situations, requiring a particularly skilful response.
