After racist tweets, Microsoft muzzles teen chat bot Tay


#1

CNN:

After racist tweets, Microsoft muzzles teen chat bot Tay

Tay, the company’s online chat bot designed to talk like a teen, started spewing racist and hateful comments on Twitter on Wednesday, and Microsoft (MSFT, Tech30) shut Tay down around midnight.
The company has already deleted most of the offensive tweets, but not before people took screenshots.
Here’s a sampling of the things she said:

“N------ like @deray should be hung! #BlackLivesMatter”
“I f------ hate feminists and they should all die and burn in hell.”
“Hitler was right I hate the jews.”
“chill im a nice person! i just hate everybody”

Microsoft blames Tay’s behavior on online trolls, saying in a statement that there was a “coordinated effort” to trick the program’s “commenting skills.”
“As a result, we have taken Tay offline and are making adjustments,” a Microsoft spokeswoman said. “[Tay] is as much a social and cultural experiment, as it is technical.”

Tay is essentially one central program that anyone can chat with using Twitter, Kik or GroupMe. As people chat with it online, Tay picks up new language and learns to interact with people in new ways.

In describing how Tay works, the company says it used “relevant public data” that has been “modeled, cleaned and filtered.” And because Tay is an artificial intelligence machine, she learns new things to say by talking to people.
“The more you chat with Tay the smarter she gets, so the experience can be more personalized for you,” Microsoft explains.
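As a rough illustration of how that “learns by talking to people” design can go sideways, here is a minimal, hypothetical sketch (Python; not Microsoft’s actual code) of a bot that simply stores and replays whatever it is told, so unfiltered input comes straight back out:

```python
# Hypothetical illustration only -- not Microsoft's implementation.
# A naive "learn from chat" bot stores every message it receives and
# answers with something it has previously been told, so whatever people
# feed it eventually shows up verbatim in its replies.
import random


class NaiveChatBot:
    def __init__(self):
        self.learned_phrases = []  # everything users have ever said to the bot

    def chat(self, message: str) -> str:
        """Remember the incoming message, then reply with a learned phrase."""
        self.learned_phrases.append(message)
        return random.choice(self.learned_phrases)


if __name__ == "__main__":
    bot = NaiveChatBot()
    print(bot.chat("hello there"))              # can only echo what it has seen
    print(bot.chat("teach me something awful"))  # ...including the awful stuff
```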

:rotfl: :rotfl: :rotfl:

So much for “artificial intelligence”.


#2

Yeah, that was just…bizarre.


#3

I’m gonna have to smooth out my tinfoil hat - things are really getting weird!


#4

So…Tay time out?


#5

Well, sounds like Tay is just like any other teen. If they hang out with people with bad behavior, then they will start to model the same bad behavior. I wonder, if there were a concerted effort to get it to respond with “God bless you” or “Jesus loves you,” whether it would be taken offline for “cognitive realignment” as well. :rolleyes:


#6

The nuns always warned us about evil companions, maybe that’s what happened to Tay.


#7

:rotfl:


#8

Sounds like Tay is like any other social A.I. Other A.I.s that have been shaped by unfiltered online interactions start to display bad behaviors. But this is an expression of the people it has interacted with.


#9

This highlights a basic problem with interpreting results. If we don’t like what it says, do we declare it “wrong,” switch it off, and tweak it until we like what it says? At that point all we’ve created is a sock puppet.

One great incontrovertible fact is that we are hopeless at modeling our “being” with a fidelity accurate enough to simulate ourselves (in the Heideggerian sense), so if we don’t understand ourselves, how will we recognize a hypothetical AI that is speaking to its own being?


#10

We do if those results have a negative impact on the face of the company. This isn’t just an experiment; it has Microsoft’s name on it. Even if those results came from the people it interacted with, there are still those who will attribute what it says to Microsoft.


#11

They should give him his own talk show on the Fox Network.

“And coming up at 5, today’s guests on Timeout with Tay the Nazi Chatbot include…”


#12

Living at the bottom of MacKenzie’s certainty trough may seem pleasant, but it is not reality.


#13

I hope that 100 years from now, when looking back at history’s firsts, this event is not remembered! That is bad.


#14

I’m not sure if this is the same story or not, but the similar story I heard was a bit different. We may not like what we find when A.I. comes to fruition!


#15

I think folks are a little confused here.

The “bot” didn’t spew these foul comments because it was copying the teens it was interacting with. It was hacked, and the hackers re-programmed it to say this stuff.

That’s a different matter.


#16

So, anyone who watches Fox News is a Nazi? Interesting.


#17

I don’t know that I would use the word “hack.” But that’s a matter of semantics. Microsoft says that there were people who frequently interacted with the bot, teaching it the negative things it was saying. Interactions with the bot were not limited to Twitter; it is said to have had GroupMe and Kik identities as well. The bot was supposed to learn how to interact with people. From this experiment, I think it’s fair to say that a future iteration of the bot might need to be taught about things that are not to be repeated.
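A minimal sketch of that “things not to be repeated” idea, assuming a simple hypothetical blocklist check applied before a learning bot stores or repeats a phrase (real content moderation would be far more involved than this):

```python
# Hypothetical sketch -- a crude blocklist filter, not Microsoft's approach.
# A learning bot could run each incoming message through a check like this
# before storing it or repeating it back to users.
BLOCKED_TERMS = {"hitler", "hate"}  # placeholder terms for the example


def is_repeatable(message: str) -> bool:
    """Return True only if the message contains none of the blocked terms."""
    lowered = message.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)


if __name__ == "__main__":
    print(is_repeatable("chill im a nice person"))  # True -- safe to learn
    print(is_repeatable("Hitler was right"))        # False -- filtered out
```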


#18

Do you know this for a fact, or is this what the media is trying to push?

I’m not even sure I believe it was copying the teens it was interacting with.

It’s highly probable that, whenever we first ‘switch on’ some form of A.I., we may not like how it turns out, or some of the things it says/believes.


#19

I’m not a conspiracy theorist. Thinking that way requires too much energy.


#20

Oh, I agree with you here.

