Tag Archives: robots

What depressed robots can teach us about mental health | Zachary Mainen

Depression seems a uniquely human way of suffering, but surprising new ways of thinking about it are coming from the field of artificial intelligence. Worldwide, over 350 million people have depression, and rates are climbing. The success of today’s generation of AI owes much to studies of the brain. Might AI return the favour and shed light on mental illness?

The central idea of computational neuroscience is that similar issues face any intelligent agent – human or artificial – and therefore call for similar sorts of solutions. Intelligence of any form is thought to depend on building a model of the world – a map of how things work that allows its owner to make predictions, plan and take actions to achieve its goals.

Setting the right degree of flexibility in learning is a critical problem for an intelligent system. A person’s model of the world is built up slowly over years of experience. Yet sometimes everything changes from one day to the next – if you move to a foreign country, for instance. This calls for much more flexibility than usual. In AI, a global parameter that controls how flexible a model is – how fast it changes – is called the “learning rate”.
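
In code, a learning rate is simply the fraction by which an estimate moves toward each new observation. A minimal sketch of this idea (my own illustration, not from the article; the function and names are invented):

```python
def update(estimate, observation, learning_rate):
    """Nudge the estimate a fraction of the way toward the new observation."""
    return estimate + learning_rate * (observation - estimate)

# In a stable world, the estimate gradually converges on the true value (10).
estimate = 0.0
for _ in range(50):
    estimate = update(estimate, 10.0, learning_rate=0.1)
print(round(estimate, 2))  # close to 10
```

A higher learning rate would converge faster; the point of the article is that the right rate depends on how fast the world itself changes.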

Failure to adapt to adversity may be one of the main reasons why humans get depressed. For example, someone who becomes disabled by a severe injury suddenly needs to learn to view themselves in a new way. A person who manages this may thrive, while one who fails to adapt may become depressed.

The idea of a depressed AI seems odd, but machines could face similar problems. Imagine a robot with a hardware malfunction. Perhaps it needs to learn a new way of grasping objects. If its learning rate is not high enough, it may lack the flexibility to change its algorithms. If severely damaged, it might even need to adopt new goals. If it fails to adapt, it could give up and stop trying.

A “depressed” AI could be easily fixed by a supervisor boosting its learning rate. But imagine an AI sent light years away to another solar system. It would need to set its own learning rate, and this could go wrong.

One might think that the solution would be to keep flexibility high. But there is a cost to too much flexibility. If learning rate is too great, one is always forgetting what was previously learned and never accumulating knowledge. If goals are too flexible, an AI is rudderless, distracted by every new encounter.
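
The trade-off can be made concrete: a learner with a very low learning rate never catches up when the world changes, while one with a very high rate snaps to every new observation (and, by the same token, would be whipsawed by noise). A toy comparison, my own construction rather than anything from the article:

```python
def track(world, lr):
    """Run a simple estimator (new estimate = old + lr * error) over a sequence of observations."""
    est = 0.0
    for obs in world:
        est = est + lr * (obs - est)
    return est

# The world sits at +10 for a long time, then abruptly flips to -10.
world = [10.0] * 100 + [-10.0] * 5

slow = track(world, lr=0.01)  # too rigid: still near the old value after the change
fast = track(world, lr=0.9)   # very flexible: almost immediately at the new value
print(round(slow, 1), round(fast, 1))
```

Neither setting is right in general, which is exactly why an agent that must set its own rate can get it wrong.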

The human brain’s equivalent of an AI’s key global variables is thought by computational psychiatrists to be several “neuromodulators”, including the dopamine and serotonin systems. There are only a handful of these highly privileged groups of cells and they broadcast their special chemical messages to almost the entire brain.

A line of studies from my laboratory and others suggests that the brain’s way of setting the learning rate involves the serotonin system. In the lab, if we teach a mouse a task with certain rules and then abruptly change them, serotonin neurons respond strongly. They seem to be broadcasting a signal of surprise: “Oops! Time to change the model.” Then, when serotonin is released in downstream brain areas, it can be seen to promote plasticity, or rewiring, particularly to rework the circuitry of an outdated model.
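
In machine-learning terms, the “Oops!” signal described above resembles surprise-modulated plasticity: large prediction errors transiently raise the learning rate, letting the model rewire, after which the rate relaxes back. A toy version (my own construction, not the lab’s model; the parameter names are invented):

```python
def adaptive_track(world, base_lr=0.05, gain=0.2):
    """Estimator whose learning rate is transiently boosted by surprise (prediction error)."""
    est = 0.0
    for obs in world:
        error = abs(obs - est)
        # Big errors ("Oops!") boost plasticity; small errors let it relax toward base_lr.
        lr = base_lr + gain * error / (1.0 + error)
        est = est + lr * (obs - est)
    return est

# The world flips from +10 to -10; the surprise-boosted learner re-adapts quickly.
final = adaptive_track([10.0] * 50 + [-10.0] * 10)
print(round(final, 1))
```

Without the surprise term (gain = 0), the same learner would still be well above zero ten steps after the change.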

The most commonly prescribed antidepressants are selective serotonin reuptake inhibitors (SSRIs), which boost the availability of serotonin in the brain. They are naively depicted as “happiness pills”, but this research suggests that they actually work mainly by promoting brain plasticity. If true, getting out of depression starts with flexibility.

If these ideas are on the right track, susceptibility to depression is one of the costs of the ability to adapt to an ever-changing environment. Today’s AIs are learning machines, but highly specialised ones with no autonomy. As we take steps toward more flexible “general AI”, we can expect to learn more about how this can go wrong, with more lessons for understanding not only depression but also conditions such as schizophrenia.

For a human, to be depressed is not merely to have a problem with learning, but to experience profound suffering. That is why, above all else, it is a condition that deserves our attention. For a machine, what looks like depression may involve no suffering whatsoever. But that does not mean that we cannot learn from machines how human brains might go wrong.

Zachary Mainen is a neuroscientist whose research focuses on the brain mechanisms of decision-making

Rise of the robots and all the lonely people | Letters

Two connected stories in Monday’s Guardian: Tom Watson asks us to “embrace an android” while Rachel Reeves describes society’s sixth giant evil as a “crisis of loneliness”.

Replacing people with machines reduces opportunities for the social interactions that help many people feel integrated. Self-service in shops, libraries, banks and other places means people can go all day without conversation with a “real” person. This is set to worsen, to the detriment of both contact and service quality.

It is no coincidence that Lidl, the fastest-growing supermarket, resisted moves to self-service tills until recently. Self-service remains widely disliked. Nor does replacing staff with machines always improve service. Here in York, Virgin Trains plans to replace its knowledgeable and efficient station ticket office staff with machines. It’s an unpopular move opposed by over 3,000 petitioners, but Virgin ploughs on like the American railroad magnate Cornelius Vanderbilt, who reputedly said “the public be damned”.

Observation shows that staff are usually far quicker at issuing tickets, especially for complex ticket orders. Machines (when not out of order) cannot answer the variety of questions the public ask. Nor can they help the many people still suffering from functional illiteracy.

Some automation brings benefits – but please, not to the point of fully replacing human interaction.
Roger Backhouse
York

On Friday, the Jo Cox Commission on Loneliness releases its findings. We, its members, are proud to take forward Jo’s vision of a more connected world. As Jo said: “Young or old, loneliness doesn’t discriminate.”

Nine million people of all ages in the UK are always or often lonely. Loneliness is as bad for us as smoking 15 cigarettes a day. However, the report shows we can all tackle loneliness. Businesses, government, charities and the public have a role, and Christmas is the perfect time to begin. Start a conversation to help us build Jo’s legacy of a less lonely society.
Tony Hawkhead Action for Children, Steve Murrells Co-op, Laura Alcock-Ferguson Campaign to End Loneliness, Mike Adamson British Red Cross, Jeremy Hughes Alzheimer’s Society, Stephen Hale Refugee Action, Catherine Johnstone Royal Voluntary Service, Richard Kramer Sense, Sophie Andrews The Silver Line, Janet Morrison Independent Age, Heléna Herklots Carers UK, Caroline Abrahams Age UK, Peter Stewart Eden Project

Robots don’t challenge surgeons such as me – they challenge dogmatic practice | Ara Darzi

On Friday 10 March, I will perform an operation in public for the first time. In a live demonstration, I will aim to show how robots can assist surgeons to cut more safely, with greater precision, and achieve better results for patients.

I should say at the outset that no patient’s life will be put at risk during this event. I will be operating on a surgical mannequin – a specially adapted version of the shop mannequin designed to respond like a human body – and the event will take place at the Science Museum in London.

I will be using the same surgical robot that I used in 2001 when I performed the first such operation on a patient in the UK. It has three arms controlled from a console a few feet away, where I sit, allowing me to cut and stitch with great precision. Almost 16 years on, this will be a nostalgic moment for me. From cutting-edge technology to museum piece in less than two decades.

I am taking part in this demonstration, together with Professor Roger Kneebone, head of the Centre for Engagement at Imperial College, because I know that technological innovation of the kind represented by the robot has transformed surgery. But it will only continue to do so in the future if we have the vision and the courage to support it.

Critics will say that past technological advances have not delivered on their early promise. Certainly there have been challenges. Last year a research paper published in the Lancet comparing robotic with non-robotic surgery for prostate cancer found both achieved similar outcomes after three months.

The Times reported the story under the headline “Robots no better than human surgeons”. The Daily Mail, however, went with “Robots are better than humans at cancer ops”, on the grounds that the patients who had the robot surgery suffered less pain immediately after the operation. Is the glass half-full? Or half-empty?

I am firmly in the former camp. As I wrote in the Lancet at the time, the fact that the robot-assisted surgery achieved an equivalent outcome should be seen as a positive result. It shows that the innovation has preserved the intended purpose of the surgery. Advances in technology such as this provide the platform on which additional innovations can be developed, to further improve the quality and safety of surgery.


Consider where we have come from: in little more than 100 years since the two-part silver scalpel, with handle and replaceable blade, was invented by Morgan Parker in 1915, it has increasingly been replaced by the electrosurgical knife – a probe carrying an electric current that burns through tissue, sealing the tiny capillaries as it cuts, reducing blood loss, improving the surgeon’s field of view and the speed of the surgery.

Now a third advance is imminent, with the invention of an electronic “nose” attached to the electrosurgical knife. This absorbs the smoke given off as the blade burns through tissue and analyses it in a mass spectrometer. The device, called the intelligent knife or iknife, can detect almost instantly what kind of tissue the surgeon is cutting through – whether, for instance, it is cancerous or not. Instead of sending tissue samples to the laboratory and waiting days or weeks for them to be tested, the surgeon will in future be able to tell whether all the cancer has been removed before the operation is complete.

Advances such as this are ushering in a new era of precision surgery, in which established clinical and pathological signs are linked with state-of-the-art molecular profiling, enabling us for the first time to tailor specific interventions to the individual biology of the patient.

I was delighted with the interest and enthusiasm shown by the Science Museum in displaying the first surgical robot ever used in Britain as part of their robotics exhibition. It will remain with the museum as a donation from the department of surgery at Imperial College London.

But if we are to continue moving forward, we need disruptive innovators who are ready to challenge dogmatic practice and an environment in which they are free to experiment. What today looks revolutionary is tomorrow’s museum exhibit.

Teaching morality to robots | Daniel Glaser

Every week comes a new warning that robots are taking over our jobs. People have become troubled by the question of how robots will learn ethics, if they do take over our work and our planet.

As early as 1942, Isaac Asimov came up with the ‘Three Laws of Robotics’, outlining moral rules robots should abide by. More recently there has been official guidance from the British Standards Institute advising designers on how to create ethical robots, intended to stop them taking over the world.

From a neuroscientist’s perspective, robot designers should learn more from human development. We teach children morality before algebra. When they’re able to behave well in a social situation, we teach them language skills and more complex reasoning. It needs to happen this way round. Even the most sophisticated bomb-sniffing dog is taught to sit first.

If we’re interested in really making robots think more like we do, we can’t retrofit morality and ethics. We need to focus on that first, build it into their core, and then teach them to drive.

Dr Daniel Glaser is director of Science Gallery at King’s College London. Listen to this week’s podcast at theguardian.com/lifeandstyle/series/neuroscientist-explains