This robot mimics human writing so closely that it is difficult to tell whether a given sample was written by a robot or by a person.
Learning to write is a process that consists not only of copying characters, but also of learning how each character is traced, the spacing between them that forms words and phrases, and of course their meaning. For centuries this skill has been uniquely human, but now robots, with the help of artificial intelligence, may be close to matching us.
Today, robots are capable of imitating or duplicating what we draw or write, but they do it "raw": they recognize the image as a whole and reproduce it line by line, like a printer. Now, thanks to a project from Brown University in Rhode Island, United States, we can see a robot that writes and draws like a human being, even in languages it doesn't know and has never seen.
Can a robot imitate a stroke just by looking at the result?
Atsunobu Kotani, a student at Brown University, developed a machine learning algorithm based on neural networks that analyzes pictures of handwritten words or sketches in order to deduce the sequence of strokes that produced them.
Stefanie Tellex, a robotics specialist at Brown University, developed the robotic system that runs the algorithm. The goal was to create a robot capable of communicating fluently with humans.
With the robot and the algorithm in place, the system was trained on a set of Japanese characters. The robot then proved capable of reproducing those characters, using the strokes that created them, with approximately 93% accuracy. But the real surprise came when the robot was able to do the same with Latin characters, even handwritten and italic ones. That is, characters it had never seen and did not know.
The key to this feat is the algorithm developed by Kotani, which helps the robot decide where and how to place each stroke and how long it should be, which distinguishes each letter of the alphabet, as well as the order in which each letter or symbol must be placed to form the correct word.
The robot relies on two algorithmic models that help it write on its own:
Global model: this allows the robot to look at the image as a whole, helping it decide the most likely starting point for a particular word or letter, as well as the most likely way to move on to the next symbol or letter.
Local model: this helps the robot analyze each individual letter: the correct movement and direction, how to finish each stroke of that character, as well as its placement, size and spacing.
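The division of labor between the two models can be pictured with a small sketch. To be clear, this is purely illustrative: the class names, interfaces, and toy geometry below are assumptions for the sake of the example, not the researchers' actual implementation.

```python
# Illustrative sketch of a two-model handwriting loop: a "global" model
# chooses where each character starts, and a "local" model emits the
# stroke sequence for that character. All names and numbers are made up.

from dataclasses import dataclass


@dataclass
class StrokePoint:
    x: float
    y: float
    pen_down: bool  # True while drawing, False when the pen lifts


class GlobalModel:
    """Stand-in for a network that looks at the whole image and
    proposes the starting point of the next character."""

    def next_start(self, char_index: int) -> tuple[float, float]:
        # Toy heuristic: advance left to right with fixed spacing.
        return (char_index * 10.0, 0.0)


class LocalModel:
    """Stand-in for a network that predicts the stroke sequence
    (movement, direction, pen lifts) for a single character."""

    def strokes_for(self, char: str, start: tuple[float, float]) -> list[StrokePoint]:
        x0, y0 = start
        # Toy output: a short vertical stroke, then a pen lift to
        # signal that the character is finished.
        return [
            StrokePoint(x0, y0, True),
            StrokePoint(x0, y0 + 5.0, True),
            StrokePoint(x0, y0 + 5.0, False),
        ]


def write_word(word: str) -> list[StrokePoint]:
    """Compose the two models: global placement, then local stroke detail."""
    global_model, local_model = GlobalModel(), LocalModel()
    trajectory: list[StrokePoint] = []
    for i, char in enumerate(word):
        start = global_model.next_start(i)
        trajectory.extend(local_model.strokes_for(char, start))
    return trajectory


traj = write_word("hi")
print(len(traj))             # 3 points per character -> 6
print(traj[0].x, traj[3].x)  # characters start 10 units apart -> 0.0 10.0
```

The point of the split is that placement (a global, whole-image decision) and stroke formation (a local, per-character decision) can be learned and applied separately, then composed into one pen trajectory.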
Stefanie Tellex pointed out that the robot does not always make the exact, correct strokes when writing the letters, but "it is quite close." What is really important, she emphasizes, is how the algorithm is able to generalize its ability to reproduce strokes.
"Much of today's work in this area requires that the robot have information about the order of the strokes in advance. If you want the robot to write something, someone has to program the order of the strokes. With what Atsu has done, you can draw whatever you want and the robot can reproduce it. It doesn't always mimic every stroke perfectly, but it comes pretty close."
Writing “hello” in 10 different languages
Once the robot had proved its ability, the next step was to test it in situations designed to confuse it. The team asked 10 people from the 'Humans to Robots' laboratory, where Tellex works, to write 'hello' in their native languages, in their own handwriting.
With this, they collected 'hello' written in Greek, Hindi, Urdu, Chinese, Yiddish and other languages, and the robot was able to reproduce them all with outstanding precision, which, as the researchers explain, made it difficult for other people to determine which strokes were made by the robot and which by a human.
The next test was to ask a group of six-year-olds to also write 'hello', each with shaky handwriting that was sometimes difficult to read. According to the team, the robot was able to copy the children's handwriting with apparent ease.
The final proof: a sketch of the Mona Lisa
The definitive test of this robot's capabilities (a robot which, by the way, does not yet have a name) came when Tellex drew a small sketch of the Mona Lisa, using simple strokes of her own, and then let the robot look at it and imitate it.
According to Tellex, the robot copied the sketch quite faithfully. Kotani recalled it as follows:
"When I got back to the lab, everyone was standing around the blackboard, looking at the Mona Lisa and wondering if I had drawn that. They couldn't believe it."
For the development team, that was the key moment when the robot proved it was more than a mere printer, since it was able to create an image with strokes like those of a human being.
Tellex says the Mona Lisa sketch was made in August, and to this day it remains on the board as a demonstration of the robot's capabilities.
“What makes this job unique is the robot’s ability to learn from scratch.”
The team responsible for the robot hopes that the ideas gathered through their research can be used to build robots capable of leaving notes, taking dictation and making sketches, all with the aim of communicating with humans, or of serving as new communication tools. It could well be a step toward a new form of communication between humans and machines.