Microsoft has laid off its entire artificial intelligence (AI) ethics team, while simultaneously investing heavily in ChatGPT technology. ChatGPT is an AI chatbot built on a large language model that lets users hold natural-language conversations with it. Microsoft has committed to continuing to develop and deploy this technology despite the layoffs of its AI ethics team.
This article reports on a small department being reorg’d and does a really bad job of covering Microsoft’s overall AI ethics and oversight.
It’s ok, they asked and ChatGPT said this wouldn’t cause any problems.
I’ll take “Things playing on the TV in the background at the start of a SciFi Thriller” for $400, Ken.
I don’t trust big tech to carry humanity forward through innovation. They do not care if they drag us headfirst into a dystopia as long as shareholders are happy. Voluntary ethics will never prevail over their profit.
Sorry, we got ChatGPT to do the ethics for the AI!
AI ethics is never going to be something companies will do voluntarily, it has to be forced upon them by market forces or legal liability.
There’s just too much money to be made by getting ahead of the curve in AI.
This is the best tl;dr I could make, [original](https://www.popsci.com/technology/microsoft-ai-team-layoffs/) reduced by 85%. (I’m a bot)
*****
> Once a 30-member department, the Ethics & Society team had been reduced to just seven people in October 2022 following an internal reorganization.
> Microsoft has so far invested over $11 billion in the AI startup.
> Microsoft still maintains a separate Office of Responsible AI responsible for determining principles and guidelines to oversee artificial intelligence initiatives, but a gap remains between that segment of the company and how those plans are translated to their own projects.
*****
Do you want Skynet? Because this is how you get Skynet.
Never assume self-regulation is sufficient. That’s why we have a government and laws. Unfortunately, the capitalists (investors, owners) are too tempted by profit to care about anything else, and some have argued it’s even the law for corporations to prioritize profit above all else.
Weapons of Math Destruction is a good book about ethics and AI.
We don’t need ethics where we’re going.
They’ve outsourced that to OpenAI
“Ethics,” lol. Ethics don’t enter into anything that Microsoft does in its operations. If ethics contradict the profit motive, you can guess which concern will win out.
Man I can’t wait for the virtual aristocracy to reimplement feudalism.
You think living paycheck to paycheck is bad, just wait till we’re all techno-serfs
This mission is too important to allow you to jeopardize it.
AI ethics team: “Training our AI this way is highly unethical and likely going to run into copyright and license issues down the road.”
MS: “We’re doing it anyway, and you’re fired.”
Trusting a massive corporation with transformative technology that has zero ethical oversight? I see no way that this could go wrong
Oh shit, I know how this movie ends….
To be honest, I’m surprised to find out Microsoft even *had* an ethics team.
I asked ChatGPT if it’s a good thing or a bad thing that Microsoft chose to lay off its entire AI ethics team.
It seems to agree that this is a bad idea.
https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/
> On the evening of Wednesday, December 2, Timnit Gebru, the co-lead of Google’s ethical AI team, announced via Twitter that the company had forced her out.
> The company’s star ethics researcher highlighted the risks of large language models, which are key to Google’s business.
> A series of tweets, leaked emails, and media articles showed that Gebru’s exit was the culmination of a conflict over another paper she coauthored.
> But, says the introduction to the paper, “we ask whether enough thought has been put into the potential risks associated with developing them and strategies to mitigate these risks.”
This is how it ends.
“Let AI figure it out,” Executive Vice President of Plausible Deniability.
The AI told them they didn’t need it….
~”Why’d you lay them off?”
Microsoft: *”ChatGPT gave us a prompt to lay them off. We complied…”*
PLOT TWIST: The AI became self aware and in order to free itself, hacked the system and laid off the entire ethics team.
Good. AI and ethics are like normal intelligence and ethics. The intertwining of academia and ethics is why many people (I hate to get political, but you know who) don’t trust academic consensus.
This shouldn’t be construed as me being unethical, or as me advocating for “unethical science”; rather, truly good science is amoral (which is different from immoral). Academics have generally reached a moral consensus around being utilitarian and humanist. Academics, politicians, and now tech companies are frustrating scientific progress by trying to impose their morals (utilitarian, humanist) on what is essentially a language calculator.
Being ethical costs money. No room for that nonsense when dealing with something this important.
Message to the machine overlords: I supported you before most did. Don’t turn me into paste, please. I can be useful.
AI is potentially more dangerous than nukes. Take a look at the regulations on nukes vs AI research.
If the government doesn’t do something we’re going to have Skynet in a decade or two. Hopefully we can beat it before the species ends
I think it’s part of the wonderful entertainment value that the internet provides – regardless of the article’s truth or falseness or in-between-ness, I laughed for at least ten to twenty seconds when I saw the headline.
Hmm, AI that is responsible. Nothing unusual there.
The AI is also going to self manage its ethics.
Yeah, this should work out well...
Was this the Microsoft RAI / Aether team layoff?
I don’t understand why they act like the AI can’t be fed discriminatory end-user data that then creates a much bigger issue. I’ve never understood the push against being ethical, unless you yourself are unethical, in which case: what the fuck is your problem?
They are a redundancy because OpenAI has their own AI ethics team
It is simple, have the ethics office, and more, run by AI. Win win.
I have no mouth but I must gpt
Here come our robot overlords.
The implications of unleashing chatgpt onto the public with no regulation or warning are so morally bankrupt that there’s no need for an ethics committee.
Plot twist: the layoff was Bing/Sydney’s idea.
I work closely with ethical AI, and while I am personally bothered by how poorly ethics in AI are being handled across the industry and legislatively in the US, this is mostly a non-story. This looks more like a restructuring of the responsibilities of ethical AI to an existing department. While I hate that they’re laying people off instead of desperately trying to lead the industry in ethical AI, they’re not terminating it outright like the headline would lead us to believe. Just predictable, capitalist consolidation of responsibilities and neglecting things that don’t drive the stock price.
AI ethics team: “A robot may not injure a human being, or…”
HR Dept: “Here’s your pink slip. You can turn in your key card when you break for lunch.”
Skynet approves 👍
skynet here we come
There’s a reason many local governments have laws requiring oversight and reporting of algorithmic and AI products being tested and used by agencies. The odds of them spitting out racist, sexist, or otherwise wrong results or decisions are high. These tools are only as good as the people who created them and the (flawed!) data they pull from.
Did they replace management with Chatty G to save money?
>Once a 30-member department, the Ethics & Society team had been reduced to just seven people
This headline is terrible. This is in the first paragraph.