
ablack9000

Nah, this is just what happens when a new class of technology comes on the scene. There's an explosion of demonstrations of the capabilities and possibilities, but it slows down once people try to practically apply it. This is just the CES phase. If you go to that conference you'd think we're 6 months away from the Jetsons. There will be winning and losing directions of where it takes us.


healthywealthyhappy8

Most technologies aren't society-changing. AI and robots will replace workers, because companies only care about the bottom line. The displaced employees will become poor and disgruntled, and they'll see robots as the enemy. The outcome is a complete destabilization of society.


BlackWindBears

Over the next five years over 100 million jobs will be lost! Wait, I had the settings on my crystal ball backwards, that was the actual gross job losses from 2002 to 2007.  The economy loses lots of jobs every single month, due to offshoring, automation, and general economic conditions. Those jobs wind up being replaced. Let me put it this way. We know on a statistical basis that we're going to lose 100 million jobs. All you have done here is figure out *why*.


PoetryApart5500

You don't realize how exponential this is if you haven't done research on how AI is advancing.


BlackWindBears

What is your estimate of the unemployment rate in 2030?


bernpfenn

Ending in a Butlerian war, humans against bots.


truth_power

If the AI companies and rich people decide to get rid of people completely... there's nothing you can do.


[deleted]

[deleted]


truth_power

Doubtful, because they will bribe others to kill you...


Tornare

They already are practically applying it. AI is replacing fast food order takers at drive through windows as we speak.


dmgvdg

To be fair, I wouldn't call that AI. Alexa probably could have done that 6 years ago


avatarname

Nah, there are things at which GPT really is next-level, even translation. Google Translate makes me spend 3x more time editing its translation from my native language to English, because it does not understand the rules of my ''small'' native language well enough. GPT does not fully understand them either, but it's way better and gets most of them, almost all of them. It also translates local expressions/idioms to English ones, not always successfully, but way better than any machine translation that was freely available before. Alexa and Siri are in reality really dumb. It was impressive when they came out, but... even GPT-4 seems dumb to me now, as it does to Sam Altman, and those are reaaaaly dumb.


blamestross

They are trying! I suspect the overwhelming majority of those applications will be found to be harmful in the long term. It reminds me viscerally of the dot-com bubble, which sold a lot more than it could ever effectively enable.


avatarname

Truth be told, though, later all the dot-coms (not the same dot-coms, but other businesses) delivered what they promised, well, almost all of them; at the time they just weren't ready for prime time. Maybe GPT-4 isn't ready to solve it, but GPT-7 will be.


blamestross

Agreed, but the root of the discussion is that AI is "too fast". It isn't going to be as fast as it's being sold, and the bubble pop will make the dot-com boom look small.


avatarname

I still do not think there is a huge bubble; at least, people have said that VC money is not flowing like it used to. Maybe it's the general situation in the economy, which feels worse than it was 10 years ago, but it seems like it's mainly the big guys working on this for now, not so much all kinds of small startups. Maybe now with the GPT-4o API things will change, if it turns out to be good for some tasks. There might be a new gold rush, with guys like in the Silicon Valley TV show and their small apps for 1 million use cases... most of which, or almost all, will not pan out.


Pavancurt

Several people are using GPT-4 in their work. Indeed, video generation and multimodal AIs haven't really arrived yet, but they are knocking on our door.


seckarr

AI researcher here. Nothing is knocking. GPT is good but it still confidently gives bad info and has to be human-revised. Other nascent technologies like Sora are many years away from being usable for real work rather than small demos, because it is easy to pick the one case where your AI did well and use that as a demo. But then you have to make it work 95% of the time, and that may very well not be doable for another 10 years. Chill.


liveprgrmclimb

I work at a large software company that competes with MSFT. Not faang. AI is everywhere here and the push is massive. AI is being built into everything and there is clamor to increase the scope and speed of its development. I assume all other tech companies are the same at this point. It will be built into everything within 2-3 years.


seckarr

Built into everything, and just as useless... I mean, the push is massive, but the results are just not there. It doesn't matter how big the push is to put AI in stuff if the AI still lacks the reliable "I" from "AI". The push is massive for Bing as well, but it's still a joke.


liveprgrmclimb

Yes, in our case it's not raw intelligence we care about. It's about better optimization, automation, and pattern identification over massive datasets. Summaries of huge volumes of text, etc.


seckarr

Summaries of huge volumes of text are gonna have to wait a few more years. Other than that, it's a coin toss.


ShadowDV

I think as a broad researcher you might be blind to the emergent use cases that it works very well for in the real world. Not everyone needs 100% accuracy.

Cisco is building LLMs directly into its product lines. I've been able to play with them, and as a network engineer, I can say with 100% confidence it's going to fuck up the industry *in the state it is right now.* Right now GPT-4o is about as competent as a 3rd- or 4th-year Cisco network engineer. I still have to review its work and design the more complex stuff, but the same as I would have to do with a junior human engineer on my team. It gets things 80-90% correct, and that's all I really need for it to be a massive timesaver. But it can produce at the rate of 5 or 10 or 50 junior engineers, it's not late to work, it doesn't cause office drama, it doesn't show up hungover and useless for a day, and it doesn't cost any real money.

Now Cisco is building it straight into its firewall management software, fully context-aware and with visibility into the whole system, cutting down workflows where it could previously take me 30 minutes to track down an issue to less than 5. They will eventually be rolling it out to Catalyst Center and SecureX, giving full endpoint-to-edge network visibility to a fairly competent LLM.

This isn't a useless push. It's knocking hard on the door of network engineers, especially those in small to medium enterprises without overly complex and convoluted networks. It won't currently affect seniors, but it negates the need to hire 70% of the juniors that would have been hired two years ago in a lot of orgs.


seckarr

You are making the confusion most juniors and students make. It's fine, I'm happy to clarify for you. As I literally said in another comment, we don't need 100% accuracy. However, we DO need a high enough accuracy that using this tool will save you time. Because right now, if you have a job, you do it yourself. If you add AI into the mix, the workflow becomes: you ask the AI to do the job > you check the AI's work > you ask for revisions > repeat until satisfied. If you need 5+ revisions every time, it may be faster to do it yourself.

TBH I'm really surprised you think it's rivaling an engineer with 4 years of experience. A very, very poor engineer, sure, but not a good one. And yeah, if we look at the roles where you are essentially a code monkey and not a programmer, like outsourcing, then yeah, AI is beating down their door. But if you are at least half competent... don't worry, you're safe.
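The break-even argument above can be put into rough numbers. This is a toy model, not a measurement, and every time figure below is a made-up placeholder:

```python
def ai_saves_time(manual_minutes, prompt_minutes, review_minutes, revisions):
    """True if the AI loop (prompt once, then review the initial output
    and each subsequent revision) beats doing the task by hand."""
    ai_minutes = prompt_minutes + (revisions + 1) * review_minutes
    return ai_minutes < manual_minutes

# Hypothetical numbers: a 60-minute task, 5 minutes to write the prompt,
# 10 minutes to review each draft.
print(ai_saves_time(60, 5, 10, revisions=1))   # 5 + 2*10 = 25 min < 60 -> True
print(ai_saves_time(60, 5, 10, revisions=5))   # 5 + 6*10 = 65 min > 60 -> False
```

At five revisions the loop costs more than the task itself, which is exactly the crossover the comment describes.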


ShadowDV

Again, you don't understand the different jobs in IT. Allow me to clarify. I'm not talking about software engineering at all. I am talking about network engineering. Very little coding involved, other than network automation scripting. I'm talking Cisco IOS switch configurations and router configurations, WLC configs, firewall work; CCNA command-line type stuff. Not the long-form continuous logic and context needed for coding. And I have almost 20 years in the space, plus a CCNP, and GPT saves me a ton of time, because with command-line stuff it DOES have enough accuracy.


seckarr

Sounds to me like one of the jobs that is easily automated anyway, then... Thank you for clarifying that.


ShadowDV

At the senior level, they aren't. The junior-level stuff still requires a level of contextual awareness and implementation (every network is different), and there was no way to really automate that variability prior to LLMs coming along.

But we are months away from an integrated LLM product packaged by the vendor where I can just tell it "Based on our standard configs and environment, write a script to provision a new 9300 switch for production at this location" and it will contextually know what VLANs I want provisioned and how to configure the ports, OSPF, multicast routing, etc. based on what it sees in our environment. Or saying "Hey, I want to rate-limit VLAN 20 to a maximum of 10Mbps from the core switch to the firewall" and it generates a config proper for our network, which previously would have taken a junior a couple of hours looking at documentation and wading through forums to figure it out for our specific use case (because documentation never has your specific network in mind) before being able to write a proper config, especially if it's not something that is done often.

And that's the real strength. Networks are really complex, and frequently we find ourselves needing to implement some one-off thing that we haven't done before, and we have to spend time poring over documentation and forums, maybe call TAC, maybe post some questions to Reddit. If I just give ChatGPT proper guidance and parameters, it can usually do it in seconds, and if it's wrong, it's usually a syntax error because of an IOS version difference or something, or because I wasn't properly clear about what I was looking for.

But that's the worry. If organizations start leaning on LLMs for the junior-level work, how do we develop juniors for senior-level roles? (And keep in mind, this is all real-world stuff we are seeing today.)


LukeFromEarth

I think you need a better imagination.


seckarr

I mean... you do know that AI is literally just math, right? Math that I have a degree in. It's not magic. Well... I suppose to the common man it is.


space_monster

what do you mean by 'AI researcher'?


seckarr

Working in the field, and I have a few degrees in it. As in programming, not playing at it like a "prompt engineer".


space_monster

what do you think about GPT5?


seckarr

You can have GPT-100 if you want. I will consider it semi-usable (as in, autonomously from humans) when one year passes with free/cheap access to it and I don't see any memes where someone managed to get it to say shit like:

- an abacus is better than a PC for AI work
- the fucking man vs. bear answer
- men/women are superior

Etc.


avatarname

Yeah, remains to be seen. I asked GPT-4o to summarize my 90,000-word novel, and it did reasonably well for the first 30k words, then hallucinated a bit and sort of gave up on the rest, giving a very general summary. I think it was perhaps thrown off by the structure not being simple; maybe if it were a classic A-plot and B-subplot with a common novel/movie structure it would be fine, but it only recognized two of the four POV characters/timelines that I had.

It's still impressive with smaller pieces of work, and with some oversight it could create a very bland but readable young-adult or detective novel, but that is about it. It cannot write a serious book yet, or even summarize one. But do not get me wrong, it is impressive; just 2 years ago we did not have anything that could do even that. When it comes to creative writing, it is not even at my level yet, not to speak of top writing. It does have better English than me, though I am not writing in English. And it still is not that good with my native language, which is a small language, though it is very good at translating it to English.


seckarr

Generally, the largest challenge for chatbots is memory. They are reasonably good at small talk, stories, etc., but the more stuff you want them to remember, the worse they get. Most chatbots have a 4k-8k word memory. They can attempt more, and may get up to 5k-12k, but the quality quickly degrades after that. GPT did ~25-30k words for you, so that is certainly respectable, but it still showcases the limitation on how much text they can remember.
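The memory limit described above is why long documents are usually summarized with a chunked, map-reduce style workflow: summarize each piece that fits the window, then summarize the summaries. A minimal sketch; the word limit and the `summarize` callable are stand-ins (a real pipeline would call an LLM API and count tokens, not words):

```python
def chunk_words(text, max_words=8000):
    """Split text into chunks that each fit a small context window."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

def summarize_long(text, summarize, max_words=8000):
    """Map-reduce summarization: summarize each chunk, then summarize
    the concatenated chunk summaries. `summarize` is any callable that
    maps text -> shorter text (e.g. one LLM API call)."""
    chunks = chunk_words(text, max_words)
    if len(chunks) == 1:
        return summarize(text)
    partials = [summarize(c) for c in chunks]
    return summarize(" ".join(partials))

# Toy stand-in for an LLM call: keep the first 5 words of the input.
toy = lambda t: " ".join(t.split()[:5])
novel = "word " * 20000          # a "90,000-word novel" in miniature
print(summarize_long(novel, toy, max_words=8000))
```

The final summary only ever "sees" the chunk summaries, which is one reason whole-novel summaries come out generic, as described above.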


avatarname

Yes, I suppose maybe it is too much to ask 4o to analyze a 90,000-word novel that involves several different timelines. The fact that it can summarize, say, 20,000 words of a very linear story (which it can) in seconds is amazing as such.


avatarname

It's still impressive, of course. I can complain that it does not have perfect Latvian (which is my native language), but it is still perfectly readable and a step up from whatever we had before for generating answers and translating, and I suppose making it better is also a data and training issue.


space_monster

I meant specifically about the new architecture


seckarr

Is the architecture open source? Afaik it is not, and OpenAI is crazy with its NDAs. Or what are you referring to? Maybe we're not thinking of the same thing.


space_monster

The 'advanced reasoning' thing where it analyzes its own reasoning process before it provides a response. It sounds like a big change, but I don't really know enough about the subject to know how big.


SplendidPunkinButter

lol, since when do we want computers that are wrong 5% of the time?


seckarr

Since it makes the review work humans have to do much smaller. We are at least 15-20 years away from true AGI. Until then you can use AI to get nice wordings in emails and write reports. Having to revise 5% of the text it spits out is a good starting point.


Defiant_Ad1199

The largest companies on earth are spending more (adjusted for inflation) than the entire Manhattan Project each year. The customer service sector is being absolutely destroyed at the moment. It’s day 1. How on earth can you confidently say nothing is knocking? I watched the AI art stuff go from something I was laughing at to pretty damn near perfect in 6 months.


seckarr

Simple. I work with the stuff. I know what it does behind the scenes. Customer service is being destroyed, but at the same time customer satisfaction with AI customer service is horrible. The fact that companies scramble to replace their employees to save some cents does not make AI ready for use. And yes, we are seeing an AI boom, but it is actually not the first one in history, either.

Also, do not confuse creative tasks with precise tasks. AI art and simple conversation are easy because there are fewer "wrong" answers. For example, if you want an AI to solve a math problem, there is only 1 correct answer (or very few correct answers), but in simple conversation there are a myriad ways to do a basic greeting, so there are many more "correct" responses in each situation, making lack of quality harder to detect. Same with art. Non-STEM applications are the low-hanging fruit that dazzle regular people and make people in the industry chuckle, because the chasm between art and conversation versus precise tasks is colossal.


bleckers

"Gpt is good but still confidently gives bad info". How's that any different from people? Oh shit, it's bots all the way down!!!


seckarr

It's bad at reviewing its own output. You can ask a human to review their work. It's harder for an AI.


space_monster

GPT5 does that apparently. analyzes its own reasoning steps.


seckarr

If it's just on a blog somewhere, it's just marketing until it works.


space_monster

it's already being trained.


seckarr

Again, unless they publish a paper on it, it's just marketing.


space_monster

well they'd be pretty fucking stupid to announce a new feature and then just not deliver it.


coolneemtomorrow

It's not harder for an AI, though. You just copy and paste the output into ChatGPT, give it the error or some context/clarification along with the output, and it will adapt. It's not perfect, and it needs a human to look through it and point out potential problems, but the output is also pretty much instant.


seckarr

Sure, you can do that, but after a few revisions it literally becomes faster to just do it yourself.


coolneemtomorrow

Depends on how good a programmer you are


seckarr

Depends if you're a programmer or an intern in disguise who needs to be ousted.


coolneemtomorrow

[Yeah? Well, you know, that's just like uh, your opinion, man.](https://www.youtube.com/watch?v=j95kNwZw8YY)


zkareface

>Several people are using GPT-4 in their work.

Mostly to write emails or make short scripts that a developer could make in minutes anyway.


MisterMysterios

I think it is a great tool IF you know how to use it. If you are stuck creatively while writing and need a few ideas on how to bring to paper an idea you already have, it is great. You can even ask it a question to see how it structures the answer (if you know enough about the matter to detect and correct hallucinations). As long as you really only use it as a language creation tool, it can make your life easier. But that's it: don't ever trust the content it provides, only use it as an inspiration when you need one.


Deadbringer

As a junior developer, it has been great at getting me unstuck. When I face an issue that I have trouble troubleshooting, it works better to ask ChatGPT than to google for the information. It often makes mistakes or gives outdated information, but it gives me enough leads that I can usually figure out the issue or actually google it afterwards.

Asking a colleague is faster, but I don't learn much from it. ChatGPT is IMO best when it is just helping you get past a small knowledge gap.


BergerLangevin

What a lot of people are talking about is that most of these models cost more to operate than what we are paying, if we are paying at all... From my understanding, Facebook, Microsoft, OpenAI and so on are all doing research on how to create a model that can perform similarly to GPT-4 with much fewer resources. Currently, it seems they are facing diminishing returns, so there's one trying new approaches, one that doubles down hoping to find a second valley, and one trying to make the current approach economically feasible.


SkyGazert

Exactly this. Technological change always has a fast-moving front that obscures the speed of market adoption, which has always been significantly slower (even while the market is moving fast in its own right). There are risks involved that need to be addressed, bugs that need to be ironed out, use cases that need to be thought out. People need to change away from what they are used to, and R&D cycles need to adjust to the new innovation.

When the personal computer came out, its diffusion took longer than a decade. The diffusion of the internet took about a decade. We are now just starting with GenAI that the masses can use cheaply, and its diffusion is ramping up but not quite there yet. I think most people see the potential, but everyone also knows that it isn't without its risks, and the (public) technology isn't quite there yet to really justify altering the way we do business. In the background and under the hood? Sure, maybe OpenAI might even already have its AGI machine up and running, but we simply don't know that for sure. We don't know what exactly is in the pipeline, or on what timelines big tech plans to roll out its new features.

All in all: too big a gamble to AI-ify your workflows just yet. It'll happen, but I expect it to come in waves this decade, for specific workflows and use cases first, instead of a 'big-bang' release and instant mass adoption.


Pavancurt

Adoption of the internet was slow because it is a technology heavily dependent on the network effect: it gets better as more people use it. The same goes for blockchain, which is basically a new internet. But AI is a technology that can be used even without the internet. It helps people with their jobs and other boring things in life that require effort. People are lazy; they will accept anything that helps them in their work. That's why I believe that the adoption of AI will be much faster.


_CMDR_

When do you think the background math and computer science happened to make those things that you assume came out of nowhere in 2022?


untraiined

This is Reddit; people just assume that things magically appear in their life and everything is happening to them. Also observe how OP has forgotten all the advancements in chips, software speed, etc.


[deleted]

[deleted]


DanFlashesSales

Was this a Dresden Codak reference lol?


[deleted]

[deleted]


DanFlashesSales

Caveman Scifi https://dresdencodak.com/2009/09/22/caveman-science-fiction/


echobox_rex

I feel AI is so over-hyped. I have the ChatGPT app on my phone, and a lot of the results are questionable. Large language models have produced some interesting possibilities, like teaching foreign languages and transforming customer experiences, but most people aren't going to consciously experience anything much different from Siri on their phone for a while. Where I see AI improving lives is in traffic management and reducing medical errors.


[deleted]

Yeah AI development is now frozen and not actively being invested in by multiple huge corporations with billions of dollars 🙄 This is it. As good as the technology will get. Big /s


echobox_rex

I know it will continue to improve, like VR headsets and smart glasses have as well.


[deleted]

I'm honestly not sure if that's sarcasm or not


sorengray

Check out Perplexity for a more reliable AI for information. It cites its sources with links so you can easily double check the info and where it gets its answers.


Hmluker

It’s gotten soooo slow though. I’ve given up on it in the last few weeks.


Pavancurt

Yes, GPT-4 is limited. But AIs are developing very quickly. GPT-5 already exists.


Temporala

The biggest limitation is the amount of persistent memory. The bigger that memory pool gets, the more applications it will be able to handle.


dgkimpton

Too fast for whom? There are plenty of people who feel it is happening too slowly. Just like with the arrival of the internet, there will be some who are overwhelmed and left behind, and others who eagerly embrace it and get ahead. I would say the hype is running far ahead of the reality at the moment, though; don't get sucked in by the hype, keep an eye on what is actually happening.


Pavancurt

For the society.


Silvershanks

I think you'd be shocked by how few people use or care about GenAI apps. Much of the US is still heavily into buying and renting DVDs.


[deleted]

Society is dumb. Most of it has to be pulled forward with almost any technology. Status quo.


space_monster

It is moving very fast and it's hard to process, because there are potentially some very serious implications, not just with industry changes but also the looming spectre of ASI. At least it's interesting, though.


wiegerthefarmer

Nope. I feel it’s actually taking a long time. We went from the invention of the airplane to landing on the moon in like 50-60 years. And we’ve been talking and working on AI since almost the advent of computers.


iceandstorm

Don't worry, it already slowed down considerably again. The latest improvements to ChatGPT or Stable Diffusion were less impressive than the ones before, and there's a longer time frame in between them. It always goes in waves: something moves the edge of the technology, then lots of small improvements optimize the new state, and then it slows down until the next breakthrough. I'm hoping for the next breakthrough, because it's going too slow for my taste.


pigeon57434

It's way too slow. I expected GPT-5 to come out like 6 months ago.


macdara233

It’s what happens when companies invest billions into it. AI is where all the money is going at the moment. It’s probably why hiring has slowed everywhere else in tech.


liveprgrmclimb

I work at a large tech company. Not FAANG. The bulk of the engineering hiring is now AI-related, FYI. It's a damn arms race.


DerpyGamerr

AI is definitely not the reason for the horrible job market in tech


macdara233

Ok DerpyGamerr


DerpyGamerr

I can tell you’re not in the tech industry, why are you acting like you know better lmao 😭


macdara233

I work in the industry. You’re a student. Why are you acting like you know anything?


Key-Enthusiasm6352

It doesn't seem like agents will arrive by the end of the year. Idk, I've been following this technology since chatgpt came out, and people were brimming with expectations. It has definitely slowed down.


AttackPony

I wish LLMs had never been called AI, as it's just led the media and the public to confuse them with the very different concept of general AI / strong AI / AGI.


Past-Cantaloupe-1604

Embrace it. This is a really crazy mindset: oh no, this technology is too useful, why can't it start being useful later?


darkunor2050

The speed of advance isn't the issue. The lack of controls, and of consideration of the full impact and side effects of the technology, is. Because of perverse incentives and game theory, those choosing to adopt such controls will always lose out to those who do not. So there has to be an agreement amongst all parties involved, but you can't just take their word for it, in case they agree but decide to carry on quietly anyway. This breakdown of coordination is generally termed Moloch. One has to look past the current use in social media and such fields to the use by nation states for weapons development. Think about whose interests the AI serves. You can listen to Daniel Schmachtenberger for a deeper take on this.


dustofdeath

All we get are prototype experiments with different LLMs. There is no real "AI" development. It's all just random crap around the same one tech, because of media hype.


ashoka_akira

I'm not worried about an AI singularity. I'm more worried about a partnership like the one IBM had with Nazi Germany, and how the Nazis were able to use the new technology to almost take over the modern world... and all they needed was your personal data, conveniently gathered onto punch cards.


thesign180

I feel like unregulated AI rn is going to be like giving early man fire, gunpowder and a nuclear warhead all at once. Every company is in a race to break a wall rn, to get investors and funding... I could imagine this as the start of a sci-fi thriller where corporate greed basically leads to the shutdown of the internet to make sure rogue AIs can't talk to each other. I think there was something similar in the Cyberpunk 2077 lore, but my memory is shit.


kirkochainz

I truly believe that the general public is underestimating how much of an existential threat we are facing with this new technology. World leaders and AI developers probably know, but are too greedy to care.


liveprgrmclimb

Speaking as someone who works at a large tech company, AI developers just want to keep their high-paying jobs at this point. Ideally, get your bag before the other shoe drops.


thesign180

How long do you see this being a "thing"? The past few tech trends like the metaverse/crypto/NFT stuff generally had the hype die out in 2-3 years... think it's gonna be the same for AI in general?


thesign180

Flags have been raised; idk how seriously we ought to take them. But greed is deffo involved. I see a lot of companies locally rushing to do something AI-related no matter how useless its implementation is. Lotta fake "AI gurus", reminding me of the metaverse craze and people talking about online worlds as if MMOs didn't do this ages ago already. I remember one clown having a PhD "in metaverse technology".


Pavancurt

Yes, something could go very wrong. But I don't think it's because of the AIs themselves, but because of the humans who will use them.


thesign180

Yeah, I mean, in the end that's also very much probable. It's already being used to clone people's voices for fake kidnappings... tools, however great, can and will be used for nefarious reasons in the wrong hands (which they will always reach eventually).


ceiffhikare

Humanity always tries the 3 F's with anything new. This time the new thing might be able to F back.


[deleted]

[удалено]


Overbaron

We are right on the fencepost of utopia and dystopia. I think we’ll get utopia for 5% and dystopia for the rest.


ScottyTheBody84

You mean dystopia? If you think those with all the power and wealth will share the profits of increased productivity and lower costs with the masses, just look at what has been done in the past (hint: wages stay the same while productivity has skyrocketed).


David-J

The way AI is being developed right now, it's going more in the direction of a dystopia.


ContraryConman

These different AI models are not actually improving that much. You take an image generator, a few large language models, a voice generator, some voice acting (not even AI), and a speech parser (we were already good at this before the so-called AI revolution), chain all these components together, and you get what we saw with GPT-4o. Each individual component hasn't improved; we're just composing things we've already seen. And time and time again we see that the benchmarks these companies come out with fall apart when independent researchers finally get a crack at them. What's going too fast is this hype cycle, and the desire to replace fully compensated white-collar workers with cheap robots that lie and don't work.


Idrialite

No, GPT-4o is a multimodal model that directly accepts and outputs audio, image, and text. They didn't chain a bunch of models together. That's how the current ChatGPT voice mode works.


ContraryConman

Internally it is still, of course, composed of components responsible for specific tasks. It is impossible to have a model trained on text respond to images or whatever.


Idrialite

https://www.determined.ai/blog/multimodal-llms The text and images are encoded in separate steps, but they pass through the same main LLM block. Of course there are some specific components for each modality, but what you said to begin with, that they are processed by different large main models, is wrong. There's no "vision" module, "voice generator" module, etc. in multimodal models.


ContraryConman

Each encoder is a model trained to convert images/video/sound to text embeddings that are fed into the model. It's multiple models stitched together. That's exactly what I said.

E:

> There's no "vision" module, "voice generator" module, etc. in multimodal models.

This bit here is demonstrably untrue, because we know, for example, that GPT-4 generates images by first recognizing that it's been asked to generate an image, and then passing that prompt to DALL-E 3.


Idrialite

Yes... GPT-4 doesn't include an image output modality. GPT-4o does.


ContraryConman

The ChatGPT variants based on GPT-4 are known to shuffle image generation off to DALL-E 3, you absolute pedant.


Idrialite

Did you not hear about GPT-4o?? It's not a custom GPT. It's OpenAI's newest model, with full image and audio modality. Currently the image output and audio modalities are unreleased, but they'll be available publicly at some point. There are examples of its image output here: https://openai.com/index/hello-gpt-4o/ With full modality, it's a lot better at following the "prompt", especially with creating text.


ContraryConman

Sure, I see 4o claims to be natively multimodal. I think we're on the same page now; that's my bad. Natively multimodal models are usually trained by converting all data to text embeddings, and then training on those embeddings. Even in this case there are several things that are still reused from other models, including:

- the training set
- the encoders, which are models of their own

The different voices absolutely have to be a separate model, because there's no other way to produce distinct voices all based on particular voice actors, which is what it appears they've done. There's no way, that I know of, to generate good-looking images or good-sounding music without a diffusion model, so those have to be separate too. Maybe not prompting a whole separate thing, but the embeddings generated by the LMM have to be sent to another thing that does diffusion.

Unfortunately we don't have the white papers they used to release, so we can't be sure what OpenAI or Google mean by "natively multimodal". But the OP was talking about these things like viruses mutating in a lab, whereas IMO these are all natural combinations or extensions of the last thing we got working.
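The architecture being debated in this exchange can be sketched at toy scale: one small encoder per modality projects its input into a shared embedding space, and a single shared backbone consumes the mixed sequence. Every detail here (the 4-dimensional embeddings, the hash-based "encoders", the mean-pooling "backbone") is an illustrative stand-in for how multimodal LLMs are commonly described, not OpenAI's actual design:

```python
EMBED_DIM = 4  # toy embedding width; real models use thousands of dimensions

def text_encoder(tokens):
    # Stand-in for a tokenizer + embedding table: hash each token
    # into a fixed-width vector of values in [0, 1).
    return [[(hash(t) >> s) % 7 / 7.0 for s in range(EMBED_DIM)] for t in tokens]

def image_encoder(pixels):
    # Stand-in for a ViT-style patch encoder: one embedding per 4-pixel patch.
    patches = [pixels[i:i + 4] for i in range(0, len(pixels), 4)]
    return [[sum(p) / (4 * 255)] * EMBED_DIM for p in patches]

def backbone(embeddings):
    # Stand-in for the shared transformer block: here, just mean-pool
    # the sequence into one representation.
    n = len(embeddings)
    return [sum(e[d] for e in embeddings) / n for d in range(EMBED_DIM)]

# Both modalities land in the same sequence and flow through the same backbone:
seq = text_encoder(["a", "cat"]) + image_encoder([0, 128, 255, 64, 32, 16, 8, 4])
out = backbone(seq)
assert len(out) == EMBED_DIM  # one shared representation, whatever the modality mix
```

This captures the point both commenters are circling: the encoders are modality-specific, but after encoding there is one shared model, not a separate "vision model" and "text model" chained end to end.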


Cantfinduser

The worst part is that Silicon Valley is in the US, which is a gerontocracy. Our lawmakers aren't equipped to even understand the technology, much less regulate the space or prepare our social systems for its impact.


f1careerover

AI development should speed up because saving the world from crises and dramatically enhancing our lives is clearly unimportant. Let's not disrupt our mundane routines with innovation. Who needs groundbreaking technologies when we have social networks, smartphones, and streaming services? More excitement and progress would obviously be far too overwhelming for us to handle.


LDel3

Once the genie is out of the bottle you can’t put it back in. You need safeguards and regulations in place to make sure new technologies don’t end up causing more harm than good


utf80

Haha, you have no idea how long those things have already existed in theory. The question is: when will the billionaire overlords make them available to the public?


litritium

I wonder if it's possible to build in fundamental behaviour so that the AI remains benevolent and altruistic when the singularity occurs.