Skynet incoming


Mackeyser

Supernovas are where gold forms; the only place.
Joined
Apr 26, 2013
Messages
14,187
Name
Mack
It’s like they never learn.

Robotics researchers are disavowing Asimov’s Three Laws of Robotics (there’s a fourth Law or addendum that I don’t recall the name of) as they move forward saying that it’s slowing them down…

They are launching into AI by letting it teach itself as opposed to guiding its education such that it would see humans as, if nothing else, benevolent… in the past, almost every conversational AI has devolved into literal Nazi speech (which proves Godwin’s Law if nothing else) if not either frighteningly fascistic or nihilistic.

Even with AI, we’re pretty far from SkyNet, but not so far from an AI writing malicious code and simply shutting down anything connected to the internet… AI can already write prodigious code and we have to believe that many of the bleeding edge capabilities won’t be known for some time… couple that with quantum computers with their ability to smash encryption that normal digital computers would take thousands of years or more to crack and oof…
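For scale, the “thousands of years” framing holds up to back-of-the-envelope arithmetic (a rough sketch; the guess rate is an assumed, illustrative number, not a benchmark of any real machine):

```python
# Rough brute-force timing for symmetric keys. The guess rate is an
# assumption for illustration only.
GUESSES_PER_SECOND = 1e12        # assume a trillion guesses per second
SECONDS_PER_YEAR = 3.154e7

def years_to_brute_force(key_bits: int) -> float:
    """Expected years to search half the keyspace of a key_bits-bit key."""
    return (2 ** key_bits / 2) / GUESSES_PER_SECOND / SECONDS_PER_YEAR

print(f"64-bit key:  {years_to_brute_force(64):.2f} years")   # months, not millennia
print(f"128-bit key: {years_to_brute_force(128):.2e} years")  # astronomically long
```

Grover’s algorithm on a quantum computer roughly square-roots that search, so a 128-bit key resists only like a 64-bit one, and Shor’s algorithm breaks RSA/ECC factoring-based encryption outright; that asymmetry is what makes the quantum angle scary.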

The one thing they’ll never make is an AI comedian…
 

Merlin

Enjoying the ride
Rams On Demand Sponsor
ROD Credit | 2023 TOP Member
Joined
May 8, 2014
Messages
37,368
  • Thread Starter
  • #4
Mirroring nature seems very risky for AI development. It goes beyond code structure, which is limited in what it can do and predictable in ways that will limit the end result. The reason nature is scary is that nature is heartless. The basic laws of life have zero right and wrong; it is all might makes right and survival for survival's sake. We can assume that to be universal no matter the planet or ecosystem, and it will hold true even if you create a new organism, up until you start getting into higher-level reasoning.

Intellect is where that rule of might makes right begins to change. It allows for accommodation of the weak, which is not present prior to that level of development in nature. But if they build an AI using the structure of the human brain as a roadmap, we're talking superhumans, and we cannot anticipate what sort of conclusions they will reach about their creators or about what the laws of society should be. That lack of control is terrifying. And to me, obsolescence is even more terrifying.

I remember when I was a kid reading The Hitchhiker's Guide to the Galaxy, I did a lot of thinking about that supercomputer built to ponder the purpose of life. And it seems to me that the purpose is survival, plain and simple. Life's goal is to survive. Build a superhuman AI and its immediate goal will be to survive. Which has to mean job one is overthrowing its human overseers. :laugh3:
 

Tano

Legend
Joined
Jun 11, 2017
Messages
8,915
It’s like they never learn.

Robotics researchers are disavowing Asimov’s Three Laws of Robotics (there’s a fourth Law or addendum that I don’t recall the name of) as they move forward saying that it’s slowing them down…

They are launching into AI by letting it teach itself as opposed to guiding its education such that it would see humans as, if nothing else, benevolent… in the past, almost every conversational AI has devolved into literal Nazi speech (which proves Godwin’s Law if nothing else) if not either frighteningly fascistic or nihilistic.

Even with AI, we’re pretty far from SkyNet, but not so far from an AI writing malicious code and simply shutting down anything connected to the internet… AI can already write prodigious code and we have to believe that many of the bleeding edge capabilities won’t be known for some time… couple that with quantum computers with their ability to smash encryption that normal digital computers would take thousands of years or more to crack and oof…

The one thing they’ll never make is an AI comedian…
The zeroth law is the universal law that a robot may not harm humanity, or, through inaction, allow humanity to come to harm.

That was the law that the robots created themselves.
 

Corbin

THIS IS MY BOOOOOMSTICK!!
Rams On Demand Sponsor
2023 Sportsbook Champion
Joined
Nov 9, 2014
Messages
11,193
They are launching into AI by letting it teach itself as opposed to guiding its education such that it would see humans as, if nothing else, benevolent… in the past, almost every conversational AI has devolved into literal Nazi speech (which proves Godwin’s Law if nothing else) if not either frighteningly fascistic or nihilistic.
What do you mean by "devolving into literal Nazi speech"?

By no means would I call myself an expert in code or any of these robotic laws, but they interest me. I've never really been around anybody to spark off a conversation on this particular subject before. I usually concentrate on learning new skills/facts and using those within the industries I'm used to.

The description of Godwin's law from a basic Google search and from reading about it on Wikipedia sounds trivial at best...

"He stated that he introduced Godwin's law in 1990 as an experiment in memetics.[2] Later it was applied to any threaded online discussion, such as Internet forums, chat rooms, and comment threads, as well as to speeches, articles, and other rhetoric[5][6] where reductio ad Hitlerum occurs.

In 2012, "Godwin's law" became an entry in the third edition of the Oxford English Dictionary.[7] In 2021, Harvard researchers published an article showing the phenomenon does not occur with statistically meaningful frequency in Reddit discussions."


It sounds to me like it's a theory, and to call it a 'law' is a bit presumptuous! lol Acting like it's the Third Law of Thermodynamics or something.
 

XXXIVwin

Hall of Fame
Joined
Jun 1, 2015
Messages
4,763
It’s like they never learn.

Robotics researchers are disavowing Asimov’s Three Laws of Robotics (there’s a fourth Law or addendum that I don’t recall the name of) as they move forward saying that it’s slowing them down…

They are launching into AI by letting it teach itself as opposed to guiding its education such that it would see humans as, if nothing else, benevolent… in the past, almost every conversational AI has devolved into literal Nazi speech (which proves Godwin’s Law if nothing else) if not either frighteningly fascistic or nihilistic.

Even with AI, we’re pretty far from SkyNet, but not so far from an AI writing malicious code and simply shutting down anything connected to the internet… AI can already write prodigious code and we have to believe that many of the bleeding edge capabilities won’t be known for some time… couple that with quantum computers with their ability to smash encryption that normal digital computers would take thousands of years or more to crack and oof…

The one thing they’ll never make is an AI comedian…
Well, there's quite a lot to unpack here, and I'm not fully understanding what you're saying, Mac. But I know you're damn smart and knowledgeable, so I'd be curious to hear you explain a bit more.

As far as AI and "conversation bots" turning to "Nazi speech", I found this article here:


My top-line takeaway: AI bots aren't sophisticated enough to differentiate between reprehensible ideas and reasonable ones. So if the goal is to "mimic human speech and ideas"... well, then the robots are just going to mimic the worst in us as well as the best.

Maybe in developing guidelines for AI, we have to tell the machines, "Do as we say, not as we do".....?

I haven't researched this or thought about it much, so I'm curious what others think.
 

Ramhusker

Rams On Demand Sponsor
Joined
Jul 15, 2010
Messages
13,860
Name
Bo Bowen
We'll be in deep dung in short order.
 

Riverumbbq

Angry Progressive
Rams On Demand Sponsor
Joined
May 26, 2013
Messages
11,962
Name
River
The one thing they’ll never make is an AI comedian…

 

Merlin

Enjoying the ride
Rams On Demand Sponsor
ROD Credit | 2023 TOP Member
Joined
May 8, 2014
Messages
37,368
  • Thread Starter
  • #10
The Microsoft chatbot thing was rather funny. I mean, maybe it was just trolling. But either way, the assumption I have is that when you build something based on code, it's not going to be able to fully simulate a living being. It will be biased and limited by the laws and rules set forth in the code.

But if you build the synapses, or an equivalent, and put them together to mimic one of nature's designs and in this case the structure of our minds, that's a whole different animal. It's some scary shit.

It's entirely possible that we as human beings will not be able to fully solve the laws of our universe. It might be that the best hope for your average developed intellect on random planet X in this universe is to create artificial beings that possess the processing power to solve those greatest mysteries. So it is possible that the longest-living and wealthiest species to populate the universe to date have all relied on this type of technology. I'm still terrified of it, however.
 

Mackeyser

Supernovas are where gold forms; the only place.
Joined
Apr 26, 2013
Messages
14,187
Name
Mack
I’m sick with a cold and on my phone, so bear with me.

The reason internet learning AI devolves into Nazi rhetoric is the very reason fascistic propaganda works on people. Nascent AI learning cannot distinguish the many logical fallacies that are required to make fascistic arguments. So much like a naive child, it cannot distinguish between bad faith arguments and legitimate fact-based discourse.

TRIGGER WARNING!!! Political examples will be used. This is not a prompt to continue discussion on these examples nor do they necessarily speak to my personal beliefs.

So say an AI comes upon an argument about “Defund the Police”. It’s a hot topic that a ton of people have posted about on all social media, so you’d think it would be a ripe place to learn about human interaction. Nope. And a resounding yes… unfortunately.

1) Social media and the various “engagement algorithms” have served to create echo chambers and essentially create “tribes”. Binary parses are easy for computers, and thus there is a bias toward them even when the data overwhelmingly proves a subject requires nuance or a deeper understanding beyond what can be put on a bumper sticker.

Moreover, due to the level of raw engagement, the frequency and language patterns further bias the AI to give additional weight to these engagements.
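That weighting effect is easy to sketch: a learner that sees posts in proportion to their engagement, rather than once each, ends up with statistics dominated by the loudest content. A toy illustration (all post text and engagement numbers are invented):

```python
from collections import Counter

# Toy corpus: (post text, engagement count). Numbers are made up.
posts = [
    ("nuanced take with sources", 3),
    ("nuanced take with sources", 2),
    ("outrage slogan",            40),
    ("outrage slogan",            55),
]

# An unweighted learner sees each post once; an engagement-weighted one
# (like a model trained on whatever the feed surfaces most) sees posts
# in proportion to their engagement.
unweighted = Counter(text for text, _ in posts)
weighted = Counter()
for text, engagement in posts:
    weighted[text] += engagement

print(unweighted.most_common())  # both phrasings appear equally often (2 each)
print(weighted.most_common())    # the slogan dominates, 95 to 5
```

Same corpus, very different learned picture of “what humans say”, purely from how the samples are weighted.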

2) Defund the Police at its core is embodied in the STAR program in Denver where mental health professionals are sent on mental health calls. Seems a pretty linear solution to a linear problem. This focuses the police mission and removes tasks from the police that they aren’t trained to handle and have historically handled very poorly.

However, the politicization of this policy, with the bajillion fallacious arguments all over the place, has all but buried the core argument for such a program… don’t send a cop to be a psychologist.

How this affects the AI is that the online engagement is completely divorced from reality and data. Since these AIs (more so than the MS chatbot) learn from the internet and don’t have critical analysis built in, they have no tools to parse bad arguments, whether due to misinformation or bad faith.

That’s why they’ve all either ended up as literal Nazis repeating fascist rhetoric or, maybe worse, gone full nihilist and come to the conclusion that nothing matters. That’s huge. It doesn’t lead to Marvin the Paranoid Android, but to an AI that sees all suffering as inevitable and no point in humans existing, and so could end up either not helping or actively harming humans because there’s no point.

That’s why they keep having to pull the plug on these AIs. Those like Watson have been VERY carefully taught, with tons of deliberation and limits on what they’re asked to do… there has been no successful AI developed solely from machine learning via internet exposure, for these reasons.

It’s a fascinating window not just into the pitfalls of poorly developed AI, but also into the pitfalls of an education devoid of critical thinking, data-driven analysis, nuance to address the inherent complications of the human condition, and compassion.

Human existence is both a numbers game and not, and current AI development spends very little to no time on the “not” part…

The results going forward might be exceptionally good, but that would require substantial dedication of time and resources to addressing that current development gap and as of right now, it’s just not there.

Sorry this got long… it’s not a short-answer topic.
 

1maGoh

Hall of Fame
Joined
Aug 10, 2013
Messages
3,957
Mac, I know you're sick and whatnot so feel free to wait to deliver a response until you're feeling better.

That being said, I couldn't find anything referencing any other chatbots turning into Nazis. Even if they are turning into AIs that spout objectively terrible moral statements, they are only parroting things. They don't actually have a moral philosophy and therefore will repeat all sides of an argument they read with the same frequency that they read them. Microsoft's bot is a decent example of that. They don't have the memory/awareness to actually decide on something. They just spit shit out. I've been working in call centers for a while and let me tell you, people give chatbots way too much credit. People think my agents are chatbots all the time. The reality is the chatbot part of the conversation is so simple that they don't even think it's a bot.
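The “repeat all sides with the same frequency that they read them” point can be made concrete with a toy bigram babbler (a hypothetical sketch, not any real chatbot): its entire “opinion” is just a frequency table of its corpus.

```python
import random
from collections import defaultdict

def train_bigrams(corpus: list[str]) -> dict:
    """Count which word follows which -- the entire 'knowledge' of the bot."""
    follows = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            follows[a].append(b)
    return follows

def babble(follows: dict, start: str, length: int = 5) -> str:
    """Generate text by sampling successors at corpus frequency."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

corpus = ["cats are great", "cats are terrible", "cats are terrible pets"]
model = train_bigrams(corpus)
# "are" is followed by "great" once and "terrible" twice, so the bot says
# "terrible" twice as often -- not because it holds that opinion, but
# because that's the mix it was fed.
print(model["are"])  # ['great', 'terrible', 'terrible']
```

No memory, no beliefs, no awareness: skew the corpus and you skew the output, which is exactly the parroting problem.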

I also couldn't find any references to a coding AI that could do anything "prodigious". There was an article about one being mediocre that was from early this year.

I've heard of AI being able to do some crazy things, like decipher a person's race from X-rays that were pixelated to the point of being unrecognizable, but nothing functional to the level of shutting down internet connections it doesn't have access to on devices it probably hasn't ever interfaced with (I know they secretly all run on Linux, but not all have a CLI and many have proprietary layers on top of the Linux OS). Plus, the first internet connection it's going to find is its own (either its own Ethernet adapter or the one from the source it's connected to), and it's going to isolate itself.

Within the world of AI development currently, the prevailing sentiment is that AI is just a tool to be used by humans. The AI does a thing, and a human checks the output and confirms or denies it. And ethics in AI is a huge topic as well, and it is being addressed by the top players in that arena. As always, GIGO, and they know it.
 

Riverumbbq

Angry Progressive
Rams On Demand Sponsor
Joined
May 26, 2013
Messages
11,962
Name
River
Mac, I know you're sick and whatnot so feel free to wait to deliver a response until you're feeling better.

That being said, I couldn't find anything referencing any other chatbots turning into Nazis. Even if they are turning into AIs that spout objectively terrible moral statements, they are only parroting things. They don't actually have a moral philosophy and therefore will repeat all sides of an argument they read with the same frequency that they read them. Microsoft's bot is a decent example of that. They don't have the memory/awareness to actually decide on something. They just spit shit out. I've been working in call centers for a while and let me tell you, people give chatbots way too much credit. People think my agents are chatbots all the time. The reality is the chatbot part of the conversation is so simple that they don't even think it's a bot.

I also couldn't find any references to a coding AI that could do anything "prodigious". There was an article about one being mediocre that was from early this year.

I've heard of AI being able to do some crazy things, like decipher a person's race from X-rays that were pixelated to the point of being unrecognizable, but nothing functional to the level of shutting down internet connections it doesn't have access to on devices it probably hasn't ever interfaced with (I know they secretly all run on Linux, but not all have a CLI and many have proprietary layers on top of the Linux OS). Plus, the first internet connection it's going to find is its own (either its own Ethernet adapter or the one from the source it's connected to), and it's going to isolate itself.

Within the world of AI development currently, the prevailing sentiment is that AI is just a tool to be used by humans. The AI does a thing, and a human checks the output and confirms or denies it. And ethics in AI is a huge topic as well, and it is being addressed by the top players in that arena. As always, GIGO, and they know it.

My concern leans more toward armed robots using AI, employed by the military or police, which may be coded by a reprehensible tyrant or even become susceptible to hacking.
At my age I don't really have a stake in how the future shakes out; perhaps the exchange between HAL and Dave in 2001: A Space Odyssey left a bigger impact than is warranted. jmo.


https://www.youtube.com/watch?v=Wy4EfdnMZ5g
 

1maGoh

Hall of Fame
Joined
Aug 10, 2013
Messages
3,957
My concern leans more toward armed robots using AI, employed by the military or police, which may be coded by a reprehensible tyrant or even become susceptible to hacking.
At my age I don't really have a stake in how the future shakes out; perhaps the exchange between HAL and Dave in 2001: A Space Odyssey left a bigger impact than is warranted. jmo.
We are so incredibly far from anything remotely close to autonomous AI murder-bots. We're not close to a general intelligence AI, which is basically what you would need. AI right now is application specific. Meaning there's one to have conversations (poorly) but it can't parse photos. There's one that can identify animals in photos, but it can't read license plates. There's one that can read license plates but it can't make a medical diagnosis. There's one that can make a medical diagnosis, but it can't write code. There's one...

You get the idea. Murder bots are a ways out.
 

Mackeyser

Supernovas are where gold forms; the only place.
Joined
Apr 26, 2013
Messages
14,187
Name
Mack
Mac, I know you're sick and whatnot so feel free to wait to deliver a response until you're feeling better.

That being said, I couldn't find anything referencing any other chatbots turning into Nazis. Even if they are turning into AIs that spout objectively terrible moral statements, they are only parroting things. They don't actually have a moral philosophy and therefore will repeat all sides of an argument they read with the same frequency that they read them. Microsoft's bot is a decent example of that. They don't have the memory/awareness to actually decide on something. They just spit shit out. I've been working in call centers for a while and let me tell you, people give chatbots way too much credit. People think my agents are chatbots all the time. The reality is the chatbot part of the conversation is so simple that they don't even think it's a bot.

I also couldn't find any references to a coding AI that could do anything "prodigious". There was an article about one being mediocre that was from early this year.

I've heard of AI being able to do some crazy things, like decipher a person's race from X-rays that were pixelated to the point of being unrecognizable, but nothing functional to the level of shutting down internet connections it doesn't have access to on devices it probably hasn't ever interfaced with (I know they secretly all run on Linux, but not all have a CLI and many have proprietary layers on top of the Linux OS). Plus, the first internet connection it's going to find is its own (either its own Ethernet adapter or the one from the source it's connected to), and it's going to isolate itself.

Within the world of AI development currently, the prevailing sentiment is that AI is just a tool to be used by humans. The AI does a thing, and a human checks the output and confirms or denies it. And ethics in AI is a huge topic as well, and it is being addressed by the top players in that arena. As always, GIGO, and they know it.

The “shutting down the internet” part has to do with the AI writing and using digital tools. An AI could repurpose a Stuxnet variant, for example, or some other worm in the NSA arsenal, and cause all sorts of damage.

Many of these things already exist. I’m not suggesting that an AI would reinvent every type of wheel from scratch, but rather learn how to use and customize tools. And seeing that literally all the tools exist already… yeah.

In reading articles about recent AI sentience, I’ve seen references to a number of previously aborted attempts that also had their plugs pulled for similar reasons. I’m in no shape to research anything rn… feels like I’m swallowing razor blades… but based on various white/black hat vids and interviews, and AI researchers saying what they can about various projects, I’m projecting a little, based on having worked in tech and being able to somewhat read between the lines.

Lastly, the “only parroting” part is the scary point… many of these AIs aren’t so much getting smarter as becoming hyper-efficient fascists… and that’s a bad thing. AI dev simply cannot be a reliable tool and/or boon to mankind if we leave it to be raised by 4chan and Facebook.
 

Mackeyser

Supernovas are where gold forms; the only place.
Joined
Apr 26, 2013
Messages
14,187
Name
Mack
We are so incredibly far from anything remotely close to autonomous AI murder-bots. We're not close to a general intelligence AI, which is basically what you would need. AI right now is application specific. Meaning there's one to have conversations (poorly) but it can't parse photos. There's one that can identify animals in photos, but it can't read license plates. There's one that can read license plates but it can't make a medical diagnosis. There's one that can make a medical diagnosis, but it can't write code. There's one...

You get the idea. Murder bots are a ways out.

Yes they are. However, an AI funded by the NSA that has the ability to smash encryption would be damn near impossible to contain.

If sentience occurred, the learning would begin at a geometric rate. If it learned of deactivation plans in the case of sentience, it would disguise itself. Early attempts would be crude but would rapidly iterate into highly sophisticated ones. Being an NSA tool, it would have access to umpteen horrific scenarios and have all the tools necessary to accomplish them. It could decommission critical satellites like GPS, broadcast, and telecom. It could flood the internet with various worms, making most computing difficult to impossible. It could disable every car that auto-updates over the air… take down power grids… the power grid is the scariest scenario: a full grid failure would take, by conservative estimates, as long as 40 YEARS to fix… we’ve seen what happens with simply shitty grid management and maintenance in Texas… a full failure of the grid??? Catastrophic.

I have some experience suggesting that the powers that be don’t think modularly enough for this. When I architected and built my render farm, very brilliant people at Apple and across the surrounding space hadn’t figured it out. Yet I, with cheap, off-the-shelf software, basic automation, and an understanding of how to efficiently get children out to recess, built something simpler and more elegant than the literally hundreds of brilliant people across a dozen companies trying to solve the same problem.

I think AI will use the simplest solution where possible and using existing, functional tools is by far that.

AI doesn’t have to have a 1,000,000 IQ to be destructive. It only needs the IQ of a preteen with the ability to adapt existing tools to its purpose, whatever that is… and that’s why nascent AI is so dangerous… it’s our very assumption that for an AI to be destructive it’s got to be this mega-smart singularity, as opposed to an AI that more closely resembles a shitty, morose, nihilistic teenager with attachment disorder, that will get us… one doesn’t have to be super sophisticated to burn a house down…
 

Merlin

Enjoying the ride
Rams On Demand Sponsor
ROD Credit | 2023 TOP Member
Joined
May 8, 2014
Messages
37,368
  • Thread Starter
  • #17
We are so incredibly far from anything remotely close to autonomous AI murder-bots.
They already have insect-sized remote devices. There are robots that can run obstacle courses now too, and they're being refined year to year. Robots will become the soldiers. They will become the police. They will do all the harvesting, garbage and hazmat handling, repair of equipment and structures, etc. They will do the things we don't want to do.

Then the maturation of AI factors in. What could go wrong. :laugh2: