A.I.
Re: A.I.
As a firm, we are having our first interactions with AI.
We sent a set of construction documents to a GC, and they wanted to run the drawings through their AI drawing-check system: 15 sheets of drawings for a multi-story building, with many notes and details. We got one comment back.
"There is an inconsistency on detail 6/S11." All but one of our details were drawn at 1/2"=1'-0" scale. The detail in question was at 1/4" scale because we needed to show more of the section. Thanks, AI! It may get better; we will find out in the next couple of months whether the drawings were error-free or AI just wasn't smart enough to see other stuff.
Our conference call system now has AI notes. It writes down everything everyone says, to its understanding. Clients have already said, "I really don't want everything that is said in meetings written down." DOD has questions about who is able to read these conversations and whether the software can be hacked.
So the AI note taking "advancement" lasted less than a week and has now been disabled.
Nero is an angler in the lake of darkness
- KUTradition
- Contributor
- Posts: 15124
- Joined: Mon Jan 03, 2022 8:53 am
Re: A.I.
Have we fallen into a mesmerized state that makes us accept as inevitable that which is inferior or detrimental, as though having lost the will or the vision to demand that which is good?
Re: A.I.
The Next Great Leap in AI Is Behind Schedule and Crazy Expensive
OpenAI has run into problem after problem on its new artificial-intelligence project, code-named Orion.
OpenAI’s new artificial-intelligence project is behind schedule and running up huge bills. It isn’t clear when—or if—it’ll work. There may not be enough data in the world to make it smart enough.
The project, officially called GPT-5 and code-named Orion, has been in the works for more than 18 months and is intended to be a major advancement in the technology that powers ChatGPT. OpenAI’s closest partner and largest investor, Microsoft, had expected to see the new model around mid-2024, say people with knowledge of the matter.
OpenAI has conducted at least two large training runs, each of which entails months of crunching huge amounts of data, with the goal of making Orion smarter. Each time, new problems arose and the software fell short of the results researchers were hoping for, people close to the project say.
At best, they say, Orion performs better than OpenAI’s current offerings, but hasn’t advanced enough to justify the enormous cost of keeping the new model running. A six-month training run can cost around half a billion dollars in computing costs alone, based on public and private estimates of various aspects of the training.
OpenAI and its brash chief executive, Sam Altman, sent shock waves through Silicon Valley with ChatGPT’s launch two years ago. AI promised to continually exhibit dramatic improvements and permeate nearly all aspects of our lives. Tech giants could spend $1 trillion on AI projects in the coming years, analysts predict.
The weight of those expectations falls mostly on OpenAI, the company at ground zero of the AI boom.
The $157 billion valuation investors gave OpenAI in October is premised in large part on Altman’s prediction that GPT-5 will represent a “significant leap forward” in all kinds of subjects and tasks.
GPT-5 is supposed to unlock new scientific discoveries as well as accomplish routine human tasks like booking appointments or flights. Researchers hope it will make fewer mistakes than today’s AI, or at least acknowledge doubt—something of a challenge for the current models, which can produce errors with apparent confidence, known as hallucinations.
AI chatbots run on underlying technology known as a large language model, or LLM. Consumers, businesses and governments already rely on them for everything from writing computer code to spiffing up marketing copy and planning parties. OpenAI’s is called GPT-4, the fourth LLM the company has developed since its 2015 founding.
While GPT-4 acted like a smart high-schooler, the eventual GPT-5 would effectively have a Ph.D. in some tasks, a former OpenAI executive said. Earlier this year, Altman told students in a talk at Stanford University that OpenAI could say with “a high degree of scientific certainty” that GPT-5 would be much smarter than the current model.
There are no set criteria for determining when a model has become smart enough to be designated GPT-5. OpenAI can test its LLMs in areas like math and coding. It’s up to company executives to decide whether the model is smart enough to be called GPT-5 based in large part on gut feelings or, as many technologists say, “vibes.”
So far, the vibes are off.
OpenAI and Microsoft declined to comment for this article. In November, Altman said the startup wouldn’t release anything called GPT-5 in 2024.
Training day
From the moment GPT-4 came out in March 2023, OpenAI has been working on GPT-5.
Longtime AI researchers say developing systems like LLMs is as much art as science. The most respected AI scientists in the world are celebrated for their intuition about how to get better results.
Models are tested during training runs, a sustained period when the model can be fed trillions of word fragments known as tokens. A large training run can take several months in a data center with tens of thousands of expensive and coveted computer chips, typically from Nvidia.
During a training run, researchers hunch over their computers for several weeks or even months, and try to feed much of the world’s knowledge into an AI system using some of the most expensive hardware in far-flung data centers.
Altman has said training GPT-4 cost more than $100 million. Future AI models are expected to push past $1 billion. A failed training run is like a space rocket exploding in the sky shortly after launch.
Researchers try to minimize the odds of such a failure by conducting their experiments on a smaller scale—doing a trial run before the real thing.
From the start, there were problems with plans for GPT-5.
In mid-2023, OpenAI started a training run that doubled as a test for a proposed new design for Orion. But the process was sluggish, signaling that a larger training run would likely take an incredibly long time, which would in turn make it outrageously expensive. And the results of the project, dubbed Arrakis, indicated that creating GPT-5 wouldn’t go as smoothly as hoped.
OpenAI researchers decided to make some technical tweaks to strengthen Orion. They also concluded they needed more diverse, high-quality data. The public internet didn’t have enough, they felt.
Generally, AI models become more capable the more data they gobble up. For LLMs, that data is primarily from books, academic publications and other well-respected sources. This material helps LLMs express themselves more clearly and handle a wide range of tasks.
For its prior models, OpenAI used data scraped from the internet: news articles, social-media posts and scientific papers.
To make Orion smarter, OpenAI needs to make it larger. That means it needs even more data, but there isn’t enough.
“It gets really expensive and it becomes hard to find more equivalently high-quality data,” said Ari Morcos, CEO of DatologyAI, a startup that builds tools to improve data selection. Morcos is building models with less—but much better—data, an approach he argues will make AI systems more capable than the data-hungry strategy embraced by top AI firms like OpenAI.
OpenAI’s solution was to create data from scratch.
It is hiring people to write fresh software code or solve math problems for Orion to learn from. The workers, some of whom are software engineers and mathematicians, also share explanations for their work with Orion.
Many researchers think code, the language of software, can help LLMs work through problems they haven’t already seen.
Having people explain their thinking deepens the value of the newly created data. It’s more language for the LLM to absorb; it’s also a map for how the model might solve similar problems in the future.
“We’re transferring human intelligence from human minds into machine minds,” said Jonathan Siddharth, CEO and co-founder of Turing, an AI-infrastructure company that works with OpenAI, Meta and others.
In AI training, Turing executives said, a software engineer might be prompted to write a program that efficiently solves a complex logic problem. A mathematician might have to calculate the maximum height of a pyramid constructed out of one million basketballs. The answers—and, more important, how to reach them—are then incorporated into the AI training materials.
OpenAI has worked with experts in subjects like theoretical physics to explain how they would approach some of the toughest problems in their field. This can also help Orion get smarter.
The process is painfully slow. GPT-4 was trained on an estimated 13 trillion tokens. A thousand people writing 5,000 words a day would take months to produce a billion tokens.
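The article's arithmetic holds up on a back-of-the-envelope check. The sketch below assumes a rough ratio of about 1.3 tokens per English word; that ratio is an assumption for illustration, not a figure from the article:

```python
# Rough check of the article's claim that 1,000 people writing 5,000 words
# a day would take months to produce a billion tokens.
TOKENS_PER_WORD = 1.3  # assumed rough ratio for English text

people = 1_000
words_per_person_per_day = 5_000

tokens_per_day = people * words_per_person_per_day * TOKENS_PER_WORD  # ~6.5M
days_per_billion = 1e9 / tokens_per_day  # ~154 days, i.e. about five months

# For scale: GPT-4's estimated 13 trillion training tokens at this pace
years_for_gpt4_corpus = (13e12 / tokens_per_day) / 365  # thousands of years

print(round(days_per_billion), round(years_for_gpt4_corpus))
```

At roughly 6.5 million tokens a day, a billion tokens takes about five months, and hand-writing a GPT-4-sized corpus this way would take millennia, which is why the effort is described as painfully slow.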
OpenAI also started developing what is called synthetic data, or data created by AI, to help train Orion. The feedback loop of AI creating data for AI can often cause malfunctions or result in nonsensical answers, research has shown.
Scientists at OpenAI think they can avoid those problems by using data generated by another of its AI models, called o1, people familiar with the matter said.
OpenAI’s already-difficult task has been complicated by internal turmoil and near-constant attempts by rivals to poach its top researchers, sometimes by offering them millions of dollars.
Last year, Altman was abruptly fired by OpenAI’s board of directors, and some researchers wondered if the company would continue. Altman was quickly reinstated as CEO and set out to overhaul OpenAI’s governance structure.
More than two dozen key executives, researchers and longtime employees have left OpenAI this year, including co-founder and Chief Scientist Ilya Sutskever and Chief Technology Officer Mira Murati. This past Thursday, Alec Radford, a widely admired researcher who served as lead author on several of OpenAI’s scientific papers, announced his departure after about eight years at the company.
Reboot
[...]
“As democracy is perfected, the president represents, more & more closely, the inner soul of the people. Someday, the plain folks will reach their heart's desire at last & the White House will be adorned by a downright moron.”
H.L. Mencken (1920)
Re: A.I.
I found this to be an interesting read.
https://www.nytimes.com/interactive/202 ... -data.html
Re: A.I.
Meta CEO Mark Zuckerberg has raised fresh concerns about the future of developer jobs, revealing that artificial intelligence (AI) at Meta is already reaching the capabilities of mid-level software engineers. During a podcast with YouTuber Joe Rogan, Zuckerberg shared his vision for the role of AI in coding and the potential disruption it poses to the job market.
"We will get to a point where all the code in our apps and the AI it generates will also be written by AI engineers instead of people engineers," he said. For context, Business Insider reported that mid-level software engineers at Meta currently earn salaries in the mid-six figures — a cost AI could significantly reduce.
https://www.indiatoday.in/technology/ne ... 2025-01-13
So awesome!
Who needs people!?
Re: A.I.
Meanwhile, someone I work with on occasion who is at a marketing agency just had 3 of their clients completely bail, citing that they are moving forward with cost-saving internal teams that will simply focus more on hyper-targeted advertising through Meta -- i.e. automation.
The whole "well, people need to adapt" line is true, in a "fuck off with your indifferent soulless 'capitalism is the only way forward - just gotta ride it out'" kind of truth. This future is trash.
-
- Posts: 168
- Joined: Tue Sep 18, 2018 9:56 am
Re: A.I.
A business that provides services (not necessarily physical services) but relies on knowledge workers to perform functions will, unfortunately, need to embrace A.I. in order to keep up with those at the forefront of their specific industry.
With that being said, companies that wholeheartedly rely on A.I. (whether using something in the cloud or internal LLMs) and 'create savings/profits' by cutting talent will also fail in the long term. It's there to augment/enhance the abilities of the knowledge workers, not replace them.
Consulting agencies are going to take the brunt of this first, and they should be embracing and building custom GPTs they can sell to their clients as a cost-savings option.
Re: A.I.
"It's there to augment/enhance the abilities of the knowledge workers, not replace."
This is simply not true.
By augmenting the abilities of one you can replace the abilities of ten.
Then the value of those abilities that the ten have is lessened while the companies, now able to do more with less, continue to eat.
Re: A.I.
This has been true throughout history. New technology comes along that makes 'the old way' obsolete and those individuals/businesses that choose to not adapt are eventually left in the past.
Re: A.I.
Well, as stated before then, here is my response:
"fuck off with your indifferent soulless "capitalism is the only way forward - just gotta ride it out"
And on a quick search, I see why kubowl99 is defending it.
Generating images from prompts is stealing from photographers and artists.
-
- Contributor
- Posts: 13291
- Joined: Fri Oct 29, 2021 8:19 am
Re: A.I.
I assume this post will be me rambling on - because I have thoughts running around in my head - but nothing definitive and specific.
A little more than 20 years ago I worked on a trading floor (CBOE) when it was "open outcry" trading. Trades were recorded on actual pieces of paper.
5,000 or so people/humans were working on the trading floor.
Probably another 5,000 or so working in "back offices" in support roles. And that's not counting the "trading desks".
Then we went "electronic". MANY people/humans were ousted because there was no longer a need for them.
I think within a year or two of my leaving (in 2005) they were down to about 500 people on the trading floor. 4500 people gone. Some by choice, most by no choice.
Realize, a good percentage of the people working on the floor were NOT traders and were not "wealthy".
"Fuck 'em if they can't take a joke"! "Sucks to be them"! "They were dumb not to be prepared"!
"McDonalds is hiring". Etc. I heard it all.
The honchos pulling the strings didn't really give a fuck that the average folks who helped them get and stay wealthy - got fucked. Oh well. That's life. We live in a technological world. Right?
So..... A.I. related, rich, poor, smart, dumb, there are some really good, dedicated, hard working people, in the work force who are going to be unemployed because of A.I., and the argument in regards to that is what? That there are job opportunities that will present themselves in the A.I. world? Wonderful. Just fucking wonderful.
Life is one giant Gutterism.
Re: A.I.
I work in IT (and have for 25 years) and use AI as a pair programmer. God forbid I do something I enjoy. Playing around and generating pictures of myself using the technology available isn't stealing from photographers and artists. It wasn't like they were taking pictures of me and I then used them to create the model. I literally took 20 selfies and then loaded them into the model. Good grief, get off your high horse.
A.I. is just another tool I'd like to become more proficient in.
As far as the fear - I get it. I'm 51 and well aware that in a few years I'll be on the chopping block, and that A.I. could, theoretically, hasten that ending.
So what should I do? Sit around bitter, worrying, and pissed off about it or actually trying to embrace the INEVITABLE change and hopefully do something good with it?
Corporate America will look for anything to save a buck so their shareholders are happy. But again - this is new?
Re: A.I.
"So what should I do? Sit around bitter, worrying, and pissed off about it or actually trying to embrace the INEVITABLE change and hopefully do something good with it?"
Well yes and no.
I think it's perfectly acceptable to be bitter, to worry, to be pissed -- you don't have to like the way our juiced-up society just says, well, adapt or die. Why are we content with being OK with destroying occupations that both give clients a service and bring the people who work on them joy? Why does 'because we can' mean 'we should'?
My friends are losing their jobs here. Developers, art directors, copywriters, illustrators, storyboard artists, animators, producers, client strategists, project managers, etc. This is shitty. How is this a good thing?
On the other hand, no, because apparently bottom line is simply the only thing that matters, in order to survive you have to learn this soul crushing technology which removes humanity and promotes cold efficiency. Embrace is the wrong word for me. Accept sadly is better.
Re: A.I.
I understand there are real-world consequences and don't discount that. I will say it's probably going to happen much, much faster than anyone would like.
Re: A.I.
good discussion!
I don't think capitalism is necessarily the only way forward. I think there are enormous obstacles to the alternatives tho, namely things like - systemic change is very very difficult. And so many otherwise-decent people have been so conditioned to believe things like, capitalism IS the only way forward, and capitalism is the best system, or even a good (and not inherently rotten) system, and all the other boogeyman systems are the rotten ones, either way.
I think it's so true that capitalism "will look for anything to save a buck so their shareholders are happy." This is very much a feature of capitalism. Maybe even thee feature.
At the end of the day, capitalism's end game is accumulation of wealth, and individual accumulation of wealth at that. And this reckless ends-justify-the-means mindset to go with it. It's not making the world a more inhabitable, or more sustainable, or more equitable, or more enjoyable, or better place.
In fact, doing just the opposite! It's instead destroying the world, with its ideals of infinite consumption for the sake of increasingly concentrating wealth and resources into an increasingly small number of Evil Rich People at the top. "Because apparently bottom line is simply the only thing that matters."
AI could and would and should be used to make the world a better place, and make it easier for us all. If we can automate more stuff we need so folks can spend more of our lives actually doing the shit that makes life worth living, why shouldn't we?
Oh yeah...because capitalism. In this system, AI is being exploited in pretty much the exact same way the working class has always been exploited: for the construct of accumulation of wealth, disproportionately to the capitalist class, with no regard for the bigger consequences.
If we want to use AI for anything but, then we're gonna need a whole new economic system.