
On December 01, 2020, the United States Patent and Trademark Office (USPTO) granted patent US 10,853,717 B2, filed by Microsoft, which describes the possibility of digitally reincarnating a person as a chatbot.

To do that, Microsoft intends to use social data, such as images, voice data, social media posts, electronic messages and written letters, to train an artificial intelligence that, through machine learning, will be able to mimic a person.

According to the patent’s abstract: ‘The social data may be used to create or modify a special index in the theme of the specific person’s personality. The special index may be used to train a chat bot to converse in the personality of the specific person. During such conversations, one or more conversational data stores and/or APIs may be used to reply to user dialogue and/or questions for which the social data does not provide data.’
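The mechanism the abstract outlines — a persona-specific index built from social data, with a generic conversational store as fallback for topics the social data does not cover — can be sketched in a few lines of Python. This is purely illustrative: the patent discloses no implementation, and every name below (`PersonaChatbot`, `social_data`, `fallback_store`) is hypothetical.

```python
class PersonaChatbot:
    """Toy model of the patent's idea: answer from a persona-specific
    index built from social data, falling back to a general store."""

    def __init__(self, social_data, fallback_store):
        # "Special index" in the person's theme: map topic -> their own words.
        self.persona_index = {post["topic"]: post["text"] for post in social_data}
        # Generic conversational data store for topics the social data lacks.
        self.fallback_store = fallback_store

    def reply(self, topic):
        if topic in self.persona_index:
            # Answer in the person's own voice, drawn from their social data.
            return self.persona_index[topic]
        # Fall back to the generic store when the social data is silent.
        return self.fallback_store.get(topic, "I'm not sure about that.")


posts = [{"topic": "coffee", "text": "Nothing beats a strong espresso!"}]
generic = {"weather": "It looks sunny today."}
bot = PersonaChatbot(posts, generic)
print(bot.reply("coffee"))   # persona answer from social data
print(bot.reply("weather"))  # fallback answer
```

In the real system the ‘index’ would be a trained language model rather than a lookup table, but the two-tier structure — persona first, generic stores and APIs second — is exactly what the abstract claims.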

The idea of using social data to create a simulacrum of a person has already been explored in fiction, notably in the TV series Black Mirror. The first episode of the second season (‘Be Right Back’) tells the story of a young woman whose boyfriend is killed in a car crash. While mourning him, she discovers that technology allows her to communicate with an artificial intelligence that imitates her boyfriend.

Microsoft’s patent enables digitally reincarnating a person as a chatbot

Source: www.forbes.com/

In real life, on February 06, 2020, a Korean TV show used virtual reality to reunite a mother with her seven-year-old daughter, who died in 2016. For eight months, the production team used VR technology to implement the late child’s face, body, and voice into an interactive virtual character.

Despite the ethical issues of ‘bringing someone back from the dead’, it is important to note that Microsoft’s patent does not specify the purpose of creating a chatbot of a deceased person.

As a matter of fact, for Microsoft the use of social data to create a person’s chatbot is far more useful for improving the company’s own customer service chatbots, as well as enabling the creation of an efficient AI assistant.

On the other hand, malicious use of such technology may increase cases of identity theft. Hacked social media accounts are already commonly used to send spam or malicious links; a chatbot impersonating a hacked person could multiply such crimes, since it would be difficult to determine whether a message was truly sent by the person or by a chatbot emulating them.

Author of the comment: Carlos Eduardo Nelli Principe, Lawyer


“If you want to learn more about this topic, contact the author or the managing partner, Dr. Cesar Peduti Filho.”
