A.I. Artificial Intelligence


12intheBox

Legend
Joined
Sep 12, 2013
Messages
9,976
Name
Wil Fay
If you have not yet played with ChatGPT - I recommend it. I am trying to develop AI tools to help me in my work. The future is coming - and this, to me, looks like the next giant leap we take - for good or for bad.
 

XXXIVwin

Hall of Fame
Joined
Jun 1, 2015
Messages
4,777
Great topic. Fascinating, and potentially terrifying.

Really smart people on both ends of the debate regarding whether to be optimistic or pessimistic about the future of AI.

Potential to make unbelievable improvements in all sorts of fields? Yep. Potential to create something we cannot control, perhaps endangering our very existence as a species? Yep, maybe.

We humans can have a few dozen interactions in a given day, and learn a finite amount. But what about a machine that can have billions of interactions on a given day, and "learn" at a pace that we can scarcely comprehend?
 

Merlin

Enjoying the ride
Rams On Demand Sponsor
ROD Credit | 2023 TOP Member
Joined
May 8, 2014
Messages
37,443
Biological life forms are the larval stage of intelligent species. Inevitably the greatest intelligences in the universe will all be artificial. There may be some species out there in transition from one (biological) to the other (artificial), whether by upgrading their DNA or by integrating the technology as cyborgs, but in the end purely artificial designs will grossly exceed nature's capabilities.

Now it could be argued that nature includes the efforts of its life forms. I might even buy that. But as biological life forms we are a step on the evolutionary ladder that will be improved upon. This is inevitable, and it is a basic truth that is hard for an organism to see, particularly at a stage like ours, where we don't know what we don't know - not to mention as we witness the birth of the artificial "upgrade" species that will take our place.

My guess is that species rise and fall in time all over the universe, and that life is prevalent when you consider the sheer number of stars and planets. What is left of the greatest of them, in their wake, is an artificial species that serves as their extension. But then what? Well, I would guess these greater intellects gravitate to the centers of galaxies, where there is much energy to harness, and extend out their fleets to monitor larval life forms in the hopes of welcoming new artificial life created in their galaxy. They may even find a way to communicate from galaxy to galaxy, making them connected and larger than just a local awareness.

Lot of imagination there I realize. But it is a fascinating subject to dwell on.
 

Merlin

Enjoying the ride
Rams On Demand Sponsor
ROD Credit | 2023 TOP Member
Joined
May 8, 2014
Messages
37,443
Re: where it's going...

It is interesting that robot technology is advancing to the point where we will soon see robots policing the streets and waging war for us. It won't be viable for a while to install an AI in each of them individually, but a single AI has the ability to control countless numbers of them in real time for real-world applications.


View: https://www.youtube.com/watch?v=XuGJqajHAHo


We are laying the backbone for light-speed data transmission worldwide with fiber and wireless technologies. There will be nowhere on the planet that is out of reach. Additionally, that network is already being tasked with surveillance-based applications that will be perfectly suited to AIs.


View: https://www.youtube.com/watch?v=HGJIWsE4oAA


LaWS is a real example of lasers just beginning to change the game for air superiority. Lasers have the potential to completely lock down airspace for any craft that can be detected - and if detected, being shot down will shortly follow. AIs offer the ability to quickly enforce a set posture over a very large area, controlling thousands of units to allow or deny airspace.


View: https://www.youtube.com/watch?v=XKwRI9CmCBM


Right now we are racing for space superiority. Whoever achieves it will dictate business on Earth. In that competitive environment there is no room for a nation to turn its nose up at AI technology; it must be embraced and maximized to be that nation. The riches in space are profound, and countries are just now waking up to that reality. Whichever company or companies corner that new market will have a lot to say about how Earth is run - and if such a company is owned by a government, that will be even more true. In fact the oft-explored sci-fi topic of megacorporations running everything seems likely.

Once Earth is under one form of leadership, we will come to realize that AIs can stabilize the swings we see in the economy, social programs, etc. That will offer control of people at every level: what we do, what we consume, even what we think. Chips are already being created that will interface with the mind and open up that cyborg sci-fi door and all that lies behind it.


View: https://www.youtube.com/watch?v=Gv_XB6Hf6gM


Kind of sucks that we're gonna miss the life extension technologies that are only a matter of time. But maybe we're lucky.
 

thirteen28

I like pizza.
Rams On Demand Sponsor
Joined
Jan 15, 2013
Messages
8,363
Name
Erik
We humans can have a few dozen interactions in a given day, and learn a finite amount. But what about a machine that can have billions of interactions on a given day, and "learn" at a pace that we can scarcely comprehend?

True, but on the other hand, the human brain is massively parallel and uses about 14 watts of power. A computer that can beat a human at chess uses vastly more. Energy isn't free or infinite, and the best AI is orders of magnitude less energy-efficient than the human brain - it's not even close.

Biological life forms are the larval stage of intelligence species. Inevitably the greatest intelligences will all be artificial in the greater universe. There may be some species out there that are in transition from one (biological) to the other (artificial) to include that species upgrading their DNA or trying to integrate that technology as cyborgs but in the end purely artificial designs grossly exceed nature's capabilities.

I think we should tap the brakes on the belief that AI can be truly intelligent, much less sentient. Right now, these are just very good programs that do what they are programmed to do, and to be fair, they look really impressive. But a computer still can't produce a truly random number either; it just gives the appearance of randomness because the sequence takes so long to repeat. But repeat it will.
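
A quick illustration of that point: pseudorandom generators like the toy linear congruential generator below (with parameters I shrank way down so the cycle is visible) always loop back on themselves eventually:

```python
def lcg(seed, a=5, c=3, m=16):
    """A deliberately tiny linear congruential generator (toy parameters,
    chosen so the period is short enough to watch it repeat)."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

gen = lcg(seed=7)
first_32 = [next(gen) for _ in range(32)]
# With modulus 16 the sequence must repeat within 16 steps:
assert first_32[:16] == first_32[16:32]
```

Real generators just use a much bigger modulus, so the repeat takes astronomically long - but the principle is the same.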

For those that are scared of AI actually becoming sentient or truly thinking, I strongly recommend this book, in which George Gilder demolishes those ideas in about 60 pages or so:


View: https://www.amazon.com/Gaming-AI-Cant-Think-Transform-ebook/dp/B08L8FG2BG/ref=sr_1_3?crid=B5DVWIRP0756&keywords=gaming+ai&qid=1681679667&sprefix=gaming+ai%2Caps%2C120&sr=8-3


It's not that AI isn't impressive - it truly is, and it will massively increase some of our capabilities. But AI actually thinking or becoming sentient isn't something I'm truly worried about ... what worries me is people and organizations deploying it as if it can become sentient or truly think.
 

Merlin

Enjoying the ride
Rams On Demand Sponsor
ROD Credit | 2023 TOP Member
Joined
May 8, 2014
Messages
37,443
I think we should tap the brakes on the belief that AI can be truly intelligent, much less sentient.
It is not a question of whether they can be sentient. We are animated matter, fed by chemical reactions and structured as the end result of millions of years of trial and error. So it follows that matter can be animated and designed to be greater than what trial and error can produce.

Also, this iteration of AI isn't what is scary. This thing is like the VIC-20. Further refinements are on the way and will come faster and faster as the systems get smarter. What will they be capable of a decade or three from now?
 

CGI_Ram

Hamburger Connoisseur
Moderator
Joined
Jun 28, 2010
Messages
48,198
Name
Burger man
Chips are already being created that will interface with the mind and open up that cyborg sci-fi door and all that lies behind it.

Where things could go with neuralink is crazy to think about.
 

thirteen28

I like pizza.
Rams On Demand Sponsor
Joined
Jan 15, 2013
Messages
8,363
Name
Erik
It is not a question of whether they can be sentient. We are animated matter, fed by chemical reactions and structured as the end result of millions of years of trial and error. So it follows that matter can be animated and designed to be greater than what trial and error can produce.

Also, this iteration of AI isn't what is scary. This thing is like the VIC-20. Further refinements are on the way and will come faster and faster as the systems get smarter. What will they be capable of a decade or three from now?

If only it were that simple. The millions of years of refinement of the human brain have produced something that thinks and is sentient while using only 14 watts of power. For reference, the A14 Bionic chip in an iPhone 12 has a thermal design power of 10 watts, has nowhere near the capability of a human brain, and yet is one of the most capable, power-efficient processors out there.

This cuts to one of the big hurdles in AI advancement: processing cycles aren't free. They require energy, and we are hitting the limits on how efficient we can make things in silicon. Transistors have been scaled down to atomic dimensions and are at, or very close to, as small as they are ever going to be. With CMOS technology, switching speeds have been tapped out for over a decade now. Graphene was thought to be the next wonder material for transistors, as it is much more thermally efficient and has potentially much higher switching speeds, but it lacks what is known as a bandgap, which is highly problematic for the type of switching circuits computers require.

And we have barely scratched the surface in understanding how the brain works. It's not enough to know how the matter is arranged - we know the chemical composition of the brain and have for a long time. But without the deeper understanding of how it works, it's not enough to simply recreate that chemical composition in a lab.

As far as how AI works under the hood, what it's really doing is a fuckload of math - which computers are really good at and well suited for. It does math problems that are unwieldy for humans to do with pencil and paper, does them extremely fast, and converges on the most probable answer. Right now, that's all AI is doing. I've done some patent work for processors that do machine learning, and at the processor instruction-set level, all it was doing was, in a parallel setting, repeated execution of an instruction called a fused multiply-accumulate. That's it, over and over and over, millions of times, until it spits out an answer. None of us should pretend to know how a brain truly arrives at an answer, but I think we can pretty safely eliminate the possibility that it does it the same way the processor does machine learning.
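
To make that concrete, here's a toy sketch in plain Python (my own illustration, not any real processor's instruction set) of the multiply-accumulate loop that dominates neural-network inference:

```python
def fused_multiply_accumulate(weights, inputs, bias=0.0):
    """One artificial 'neuron': nothing but repeated multiply-accumulate.
    Real ML hardware runs millions of these steps in parallel."""
    acc = bias
    for w, x in zip(weights, inputs):
        acc += w * x  # the multiply-accumulate step, over and over
    return acc

# A 'layer' is just many such neurons applied to the same inputs:
def layer(weight_rows, inputs):
    return [fused_multiply_accumulate(row, inputs) for row in weight_rows]

print(fused_multiply_accumulate([0.5, -1.0, 2.0], [4.0, 3.0, 1.0]))  # 0.5*4 - 1*3 + 2*1 = 1.0
```

Stack enough of these layers and you get a modern network - but every step is still just arithmetic.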

I'm not saying AI as it is called won't be able to do amazing things - it absolutely will (and for reference, on another thread about Kenny Golladay, I used ChatGPT to generate the rap lyrics for my rendition of "Kenny Golladay got Paid.") But for the foreseeable future, if ever, it will not be able to think and will not be truly intelligent - it will only be an amplification of our own, human intelligence.

So to demonstrate what I mean, I'm going to close this diatribe with a problem for you to think about in an area where you have a good understanding: designing an AI to complete the same pass that Stafford made to Kupp at the end of the playoff game in Tampa Bay (for bonus points, think about how hard it would be to design an AI to play NFL QB at a high level generally). We'll assume a robot has been created that can mechanically replicate the throwing motion, etc. (although that would be a difficult problem in and of itself). But now you will have to take the AI and teach it to:

a) properly read and diagnose the defensive coverage, including any nuances in that coverage
b) calculate the velocity vector (speed and direction) of Kupp as he ran down the field with a high degree of precision
c) determine, based on b), how much force to apply to the ball so that it arrives at the spot where Kupp can catch it
d) in conjunction with c), determine the timing of the ball's release, with precision
e) do those things while consuming no more energy than the human brain consumes.

While we know Stafford did a), I assure you he wasn't doing all that math in his head at that moment to make sure he could get the ball to Kupp. Items b), c), and d) on that list were done purely intuitively and yet achieved the same result. He did it with a Tampa Bay defensive lineman bearing down on him, and with the added human element of being in a pressure situation. And yet, relying only on developed muscle memory and intuition in the face of those distractions, he made a pass about as perfect as could be thrown. If we ever get to the point that AI can do that, then we'll know it's truly intelligent (and it would also ruin the NFL, because it would completely remove the human element).
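
For flavor, items b) through d) reduce - in a wildly simplified toy model of my own (flat 2-D field, constant speeds, no gravity, no defenders) - to a lead-target calculation:

```python
import math

def lead_target(receiver_pos, receiver_vel, ball_speed):
    """Toy 2-D intercept: where to aim so a ball at constant ball_speed
    meets a receiver running in a straight line at constant velocity.
    Assumes ball_speed exceeds the receiver's speed."""
    rx, ry = receiver_pos
    vx, vy = receiver_vel
    # Meet when |r + v*t| = s*t, i.e. (|v|^2 - s^2) t^2 + 2(r.v) t + |r|^2 = 0
    a = vx * vx + vy * vy - ball_speed ** 2   # negative: ball outruns receiver
    b = 2.0 * (rx * vx + ry * vy)
    c = rx * rx + ry * ry
    t = (-b - math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)  # the positive root
    meet = (rx + vx * t, ry + vy * t)         # release the ball toward this spot
    return t, meet

t, spot = lead_target(receiver_pos=(10, 0), receiver_vel=(0, 0), ball_speed=5.0)
# Stationary receiver 10 yds away, ball at 5 yd/s: meet after t = 2.0 s at (10.0, 0.0)
```

And that's the easy part - a real QB solves a far messier version of this, plus the read in a), intuitively, in about two seconds.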

In addition to thinking about that problem, you should get the Gilder book I linked in my previous post. It's only $2.99 on Kindle and it's a great read that can be knocked out quickly. Don't be a cheap ass :zany1:
 

XXXIVwin

Hall of Fame
Joined
Jun 1, 2015
Messages
4,777
Billionaire fight!

I saw some of an interview with Elon Musk this evening, where he describes an argument he had several years ago with fellow billionaire (Google founder and AI enthusiast) Larry Page.


A few interesting excerpts of their argument, according to an MIT professor who witnessed it:

Elon kept pushing back and asked Larry to clarify details of his arguments, such as why he was so confident that digital life wouldn’t destroy everything we care about.

‘At times, Larry accused Elon of being “speciesist”: treating certain life forms as inferior just because they were silicon-based rather than carbon-based.’

The MIT professor said Page is a ‘passionate’ supporter of digital utopianism, which holds that robots and AI are not a threat to the future of our species.

‘Larry [said] that digital life is the natural and desirable next step in the cosmic evolution and that if we let digital minds be free rather than try to stop or enslave them the outcome is almost certain to be good,’ he continued.

‘He argued that if life is ever going to spread throughout our galaxy, which he thought it should, then it would need to do so in digital form.’



Anyway... I'm pretty optimistic that we don't have to worry about AGI being an existential threat within our lifetimes. But some of the most powerful people in the world of AI research clearly think that AI could someday be a threat to replace humans as the dominant 'species.'

And it blows my mind that Larry Page would accuse Musk of being "speciesist" for wanting to protect humanity from AGI supremacy :dizzy:
 

XXXIVwin

Hall of Fame
Joined
Jun 1, 2015
Messages
4,777
True, but on the other hand, the human brain is massively parallel and uses about 14 watts of power. A computer that can beat a human at chess uses vastly more. Energy isn't free or infinite, and the best AI is orders of magnitude less energy-efficient than the human brain - it's not even close.



I think we should tap the brakes on the belief that AI can be truly intelligent, much less sentient. Right now, these are just very good programs that do what they are programmed to do, and to be fair, they look really impressive. But a computer still can't produce a truly random number either; it just gives the appearance of randomness because the sequence takes so long to repeat. But repeat it will.

For those that are scared of AI actually becoming sentient or truly thinking, I strongly recommend this book, in which George Gilder demolishes those ideas in about 60 pages or so:


View: https://www.amazon.com/Gaming-AI-Cant-Think-Transform-ebook/dp/B08L8FG2BG/ref=sr_1_3?crid=B5DVWIRP0756&keywords=gaming+ai&qid=1681679667&sprefix=gaming+ai%2Caps%2C120&sr=8-3


It's not that AI isn't impressive - it truly is, and it will massively increase some of our capabilities. But AI actually thinking or becoming sentient isn't something I'm truly worried about ... what worries me is people and organizations deploying it as if it can become sentient or truly think.

A couple of comments:

1. Time frame. Many of us aren't "worried" about AGI being realized within the next couple of decades. But 100 or 200 years from now? Who knows.
2. The energy problem. Sounds like this fusion breakthrough is a really huge deal. Nearly limitless energy supply? Again, maybe not within the next few decades, but 100 or 200 years from now, who knows. We can power up as many computers as we want...

3. Strong AI vs weak AI. Weak AI is about SEEMING to have consciousness, whereas strong AI is about ACTUALLY having consciousness. At a certain point, weak AI might become so phenomenally convincing that it could wield unimaginably immense power without "actually" being sentient. If it walks like a duck and quacks like a duck....
4. Nature of consciousness. What the heck is consciousness, anyway? Brian Greene makes a compelling argument that someday we might be able to nail down the exact physical properties that provide the "sensation of consciousness." Here's a great short clip about it:

View: https://m.youtube.com/watch?time_continue=15&v=9qLQh9DfbMs&embeds_euri=https%3A%2F%2Fwww.google.com%2F&feature=emb_logo

5. Questioning Gilder. I read a little bit about George Gilder, and there's no doubt he's extremely intelligent and articulate and well-read. But he does have some unusual and controversial stances on things. (His stance on Intelligent Design vs. "pure" Darwinism being one of the main ones). And the vast majority of people in AI research do not seem to share his views on the fundamental limits of AI. I'm not saying Gilder is "right" or "wrong" about the limits of AI... I'm just saying he is one voice among many.

Anyway... so many permutations of possibilities for AI. There sure have been some incredible advances just in the last few months. But I think it's impossible for any human to predict where AI will be in 2 years, or 5, or 10, or 20, or 50, or 100.
 

Jacobarch

Hall of Fame
Joined
Mar 28, 2016
Messages
4,936
Name
Jake
I think something everyone has to remember at this point is that "AI," if you want to call it that, is still just an algorithm that someone wrote. It doesn't think; it pulls or scrapes data from the web, wiki sources, etc. as its source - its brain, so to speak. It's a very complex model that successfully puts together information that's correlated to the subject matter. That's not to say it isn't impressive, or worrisome at some point. However, it's the copyright issue that has a lot of people worried. For example, you can tell GPT to write an essay on a subject and it will pull existing thoughts and writings on that subject, which is why this will probably end up in court at some point. It is an amazing tool, but as of right now that's all it is. True AI isn't here yet, but we are on the precipice.
Y'all should listen to the Lex and Rogan podcast; he sheds some light on how the algorithm really works, and on how that algorithm can be dangerous if the wrong people are writing it. Which is already happening.
 

thirteen28

I like pizza.
Rams On Demand Sponsor
Joined
Jan 15, 2013
Messages
8,363
Name
Erik
A couple of comments:

1. Time frame. Many of us aren't "worried" about AGI being realized within the next couple of decades. But 100 or 200 years from now? Who knows.
2. The energy problem. Sounds like this fusion breakthrough is a really huge deal. Nearly limitless energy supply? Again, maybe not within the next few decades, but 100 or 200 years from now, who knows. We can power up as many computers as we want...

3. Strong AI vs weak AI. Weak AI is about SEEMING to have consciousness, whereas strong AI is about ACTUALLY having consciousness. At a certain point, weak AI might become so phenomenally convincing that it could wield unimaginably immense power without "actually" being sentient. If it walks like a duck and quacks like a duck....
4. Nature of consciousness. What the heck is consciousness, anyway? Brian Greene makes a compelling argument that someday we might be able to nail down the exact physical properties that provide the "sensation of consciousness." Here's a great short clip about it:

View: https://m.youtube.com/watch?time_continue=15&v=9qLQh9DfbMs&embeds_euri=https%3A%2F%2Fwww.google.com%2F&feature=emb_logo

5. Questioning Gilder. I read a little bit about George Gilder, and there's no doubt he's extremely intelligent and articulate and well-read. But he does have some unusual and controversial stances on things. (His stance on Intelligent Design vs. "pure" Darwinism being one of the main ones). And the vast majority of people in AI research do not seem to share his views on the fundamental limits of AI. I'm not saying Gilder is "right" or "wrong" about the limits of AI... I'm just saying he is one voice among many.

Anyway... so many permutations of possibilities for AI. There sure have been some incredible advances just in the last few months. But I think it's impossible for any human to predict where AI will be in 2 years, or 5, or 10, or 20, or 50, or 100.


We will see. Right now, machines can't think, and I don't think they ever will be able to do so. The computer that beats the world chess champion? It has no idea that it's playing chess, or even doing a shitload of math, same way that your dishwasher has no idea that it's washing dishes.
 

thirteen28

I like pizza.
Rams On Demand Sponsor
Joined
Jan 15, 2013
Messages
8,363
Name
Erik
Y'all should listen to the Lex and Rogan podcast; he sheds some light on how the algorithm really works, and on how that algorithm can be dangerous if the wrong people are writing it. Which is already happening.

What scares me about AI is not some Skynet-type scenario or anything that involves actual thinking and self-awareness. Instead, it's people putting it in roles for which it's not suited under the false belief that it can think and that it can somehow do so better than humans.

Do you want AI with its finger on the nuclear button? Running a nuclear reactor? Me neither.
 

Jacobarch

Hall of Fame
Joined
Mar 28, 2016
Messages
4,936
Name
Jake
What scares me about AI is not some Skynet-type scenario or anything that involves actual thinking and self-awareness. Instead, it's people putting it in roles for which it's not suited under the false belief that it can think and that it can somehow do so better than humans.

Do you want AI with its finger on the nuclear button? Running a nuclear reactor? Me neither.

Agreed. There are already right- and left-leaning political AI models; Elon went as far as saying he's working on TruthGPT. The fact is, we have to remember that these are living people, with biases, writing these algorithms. And that in itself is the issue at hand here.
 

Merlin

Enjoying the ride
Rams On Demand Sponsor
ROD Credit | 2023 TOP Member
Joined
May 8, 2014
Messages
37,443
And it blows my mind that Larry Page would accuse Musk of being "speciesist" for wanting to protect humanity from AGI supremacy :dizzy:
Yeah, that's absurd. It's not easy to step outside of our human-centric collective ego and see the reality of the situation, but as usual Musk has a good head on his shoulders, with a nice mix of common sense and intellect. He is correct, and has been for some time, in recommending caution.

A species really has two choices: ban the technology or utilize it, and I figure across the universe most opt to utilize it due to the numerous advantages it offers. The only way we could ban the technology would be if we had a one-world government, and that would be stagnation.

So in a way a species doesn't have a choice. It's stagnation, or roll the dice and hope that your people as a whole are intelligent enough to steer through that minefield. And in the end, if we are smart enough to perfect the technology, we give birth to a species that supersedes us.

Looking at the universe I really feel like there should be a God. Just so that I could ascribe the irony we see in the very structure of things to that entity. :laugh2:

I think something everyone has to remember at this point that "AI" if you want to call it that is still just an algorithm that someone wrote.
The current iteration of AI is that, and the technology will be incredibly powerful once it starts to mature. We already see shocking capability given what it is. But to think that this is the height of the technology is to completely misdiagnose the threat.

At some point we will be putting together the billions of synapses and logic gates in physical structures that become actual live beings. And we will always resent them by nature - by that I mean discount them as well as hate on them - because at a primal level the threat is understood, even if we don't fully recognize it en masse.
 

XXXIVwin

Hall of Fame
Joined
Jun 1, 2015
Messages
4,777
We will see. Right now, machines can't think, and I don't think they ever will be able to do so. The computer that beats the world chess champion? It has no idea that it's playing chess, or even doing a shitload of math, same way that your dishwasher has no idea that it's washing dishes.
Great post, @thirteen28.

I really like how you've summarized the essential thesis of Gilder's argument.

I'm not sure yet which way I lean on the debate ("is AI consciousness possible?") but I like your summary of one side.
 

XXXIVwin

Hall of Fame
Joined
Jun 1, 2015
Messages
4,777
We will see. Right now, machines can't think, and I don't think they ever will be able to do so. The computer that beats the world chess champion? It has no idea that it's playing chess, or even doing a shitload of math, same way that your dishwasher has no idea that it's washing dishes.
Don't know if you are familiar with Ray Kurzweil, but this guy is amazing. Been reading his stuff the last couple days.

His books about the future of computing have contained a LOT of shockingly accurate predictions. His book "The Singularity is Near" is mind blowing.

Here's a video where he addresses the question of whether or not an AI can achieve true "consciousness". He is careful to define the term.


View: https://m.youtube.com/watch?v=0vRDtROzgO4
 

1maGoh

Hall of Fame
Joined
Aug 10, 2013
Messages
3,957
I recently lost my job to AI, so that's started already. All the talk about AI and other tech tools freeing up humans to do other work we're better at is a lie (as anyone who has interacted with a business would have suspected).

On a more positive note, the book Influx by Daniel Suarez is a fascinating read about how technology advancements like AI and nuclear fusion interact with each other, and about the future of humanity. The story isn't always great, but as a thought experiment it's phenomenal.
 

thirteen28

I like pizza.
Rams On Demand Sponsor
Joined
Jan 15, 2013
Messages
8,363
Name
Erik
Don't know if you are familiar with Ray Kurzweil, but this guy is amazing. Been reading his stuff the last couple days.

His books about the future of computing have contained a LOT of shockingly accurate predictions. His book "The Singularity is Near" is mind blowing.

Here's a video where he addresses the question of whether or not an AI can achieve true "consciousness". He is careful to define the term.


View: https://m.youtube.com/watch?v=0vRDtROzgO4


This discussion can get into mindfuck territory quick!!

:explode1: