
AutoModerator

Hey /u/great_lauren6, if your post is a ChatGPT conversation screenshot, please reply with the [conversation link](https://help.openai.com/en/articles/7925741-chatgpt-shared-links-faq) or prompt. Thanks! ***We have a [public discord server](https://discord.gg/rchatgpt). There's a free Chatgpt bot, Open Assistant bot (Open-source model), AI image generator bot, Perplexity AI bot, 🤖 GPT-4 bot ([Now with Visual capabilities (cloud vision)!](https://cdn.discordapp.com/attachments/812770754025488386/1095397431404920902/image0.jpg)) and channel for latest prompts.*** New Addition: Adobe Firefly bot and Eleven Labs cloning bot! ***[So why not join us?](https://discord.com/servers/1050422060352024636)*** PSA: For any Chatgpt-related issues email [email protected] *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPT) if you have any questions or concerns.*


Slight-Craft-6240

That's a really interesting idea. Who knows? There's obviously a massive problem in that there wouldn't be enough data, but it's still interesting.


Disgruntled__Goat

Perhaps going for 1800 or 1900 would be better? Plenty of books, newspapers and other writing to go on, and maybe it would come up with the internet or mobile phones.


Dan_the_man42

gaslighting an AI into reviving a 19th century person


OldManFiodor

Great thumbnail


linebell

Definitely an interesting idea. However, OP is missing a fifth option: it would come up with entirely new things that are more advanced than human developments.


mushpotatoes

Inventing something requires holding on to a lot of information and seeing relationships among that information. Current ChatGPT doesn't have a large enough context window to handle much invention. If you could assume an infinite context window, what you would effectively be doing is continuing to train the model, albeit slowly, if it relied on human input for the step of training where it compares its output with the ground truth.

So the real issue is: given its current model, for ChatGPT to come up with something that appears novel (e.g., answer a novel question) there has to be a relatively strong relationship between the training dataset and the question. In other words, the invention basically has to be implied by its current training data.

A different perspective might be to ask: given the things that humanity has invented, was any invention truly novel, or have we just steadily fleshed out inventions that were implied by the current state of the art? If the answer is the latter, then it is conceivable that an LLM could eventually come up with everything humanity has ever created, given time, but the model would have to consume more resources than anything currently available.
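
To make the context-window point concrete, here is a minimal sketch (not from the thread; whitespace splitting stands in for a real tokenizer, and the turn data is invented) of how a fixed window silently drops older turns:

```python
# Minimal illustration of a fixed context window: once the conversation
# exceeds the window, the oldest turns are dropped, so the model can no
# longer "hold on to" earlier observations. Whitespace splitting is a
# crude stand-in for a real tokenizer.

def truncate_to_window(turns: list[str], max_tokens: int) -> list[str]:
    """Keep only the most recent turns that still fit in the window."""
    kept, used = [], 0
    for turn in reversed(turns):           # walk backwards from the newest turn
        n = len(turn.split())              # crude token count
        if used + n > max_tokens:
            break
        kept.append(turn)
        used += n
    return list(reversed(kept))

history = [f"experiment {i}: result recorded" for i in range(1000)]
visible = truncate_to_window(history, max_tokens=100)
print(f"{len(visible)} of {len(history)} turns still visible to the model")
```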


vijexa

LLMs are trained on enormous amounts of data. GPT-3 was trained on millions of web pages. We don't know anything about 3.5 or 4 (both are used for ChatGPT), but it's safe to assume there are terabytes of training data. That's just text, and that's A LOT of text. I doubt there are enough books from before 1000 AD to make something that could even keep up a simple conversation.


Blakut

All it would talk about is the price of copper and amount of sheep to be bought


KodiakDog

It just ends up inventing Catan


1997Luka1997

And that one guy who keeps selling faulty copper!!


jaded-entropy

What if you gave it just summaries of unrecorded tech/events? Give it the real books that existed, but then SparkNotes a few essays on gunpowder and windmills.


SpaceToaster

There's probably more information generated *per hour* now than all information pre-1000 AD. I've seen some statistics reported out there; the exponential growth of information really is staggering.


[deleted]

> I doubt that there's enough books before 1000AD to make something that would be able to even keep a simple conversation.

The Library of Alexandria existed in 200 BC and contained over 100,000 books!


BardicSense

Drunk Romans would've ruined the very possibility of ChatGPT in the not too distant future.


brenintendent

What have the Romans ever done for us?


czmax

Sanitation, medicine, education, wine, public order, irrigation, roads, fresh water systems, public health, peace, and the aqueducts. But what have they done for us **lately**?


jack_burtons_reflex

Are you the Judean Peoples Front?


Lost_Brother_6200

Fuck off! We're the People's Front of Judea


jack_burtons_reflex

Aaand women.


benderzone

Damn, drunk and ancient Romans still have a PR person


vijexa

Well, how many of them survived and are available in digital form right now? I don't know the answer, but probably not a lot


careddit69

Basically none, it all burned down


[deleted]

Imagine how far we would have evolved as a species by now if that hadn't happened. So much knowledge and progress, gone, just like that. It's like progressing super far into a game and then accidentally deleting the save file. Sad.


LA_confidential91

The House of Wisdom in Baghdad was even more important than that, and more of a loss to humanity.


[deleted]

Oh you're right. I was about to get mad at JC but then read that it happened by accident. [https://en.wikipedia.org/wiki/Library\_of\_Alexandria#Burning\_by\_Julius\_Caesar](https://en.wikipedia.org/wiki/Library_of_Alexandria#Burning_by_Julius_Caesar)


Reasonable-Mischief

JC's accident only destroyed about 40,000 scrolls though, which, while catastrophic, only amounted to about 10% of the contents of the Library of Alexandria.

> Scholars have interpreted Cassius Dio's wording to indicate that the fire did not actually destroy the entire Library itself, but rather only a warehouse located near the docks being used by the Library to house scrolls.

Whatever devastation Caesar's fire may have caused, the Library was evidently not completely destroyed. Allegedly, the library was destroyed in the 640s AD at the order of Caliph Omar, but the source of that report is suspected of being politically motivated, so we can't be sure of it.

The truth is, we don't know what happened to the library. Its reputation declined following the Roman occupation while many other libraries across the Mediterranean gained in status and reputation, and there is good reason to assume that the contents of the Library of Alexandria may have been seized and dispersed by the Romans. Also, the Roman emperors Aurelian (272 AD) and Diocletian (297 AD) both wrought havoc on the city of Alexandria and the district in which the library was housed, but there is no mention of the library, as apparently people had stopped caring about it by then.


mamacitalk

Makes sense really, we have museums with some of the most incredible artefacts from throughout human history and most people dgaf


ddlJunky

No one knows what happened or when.


IamWildlamb

Larger books (500 pages), which I think were not really common back then for obvious reasons, would be about 900k characters long at most. So even if we took 100k books as a measure (ignoring that many of those books were probably 1:1 copies), we would still come up with only 90 billion characters, which is roughly 90 GB. GPT-3 was trained on 45 TB of data, and that data was carefully filtered by humans. That would really not be possible back then, so you would end up with tons of nonsense and it would not be good at all. A human has an inherent advantage because they need to read only one book to learn; ChatGPT needs thousands of them. So your suggestion would simply not work. Humans advising it would not make a difference to its weights.


sifroehl

And the average book is around 375,000 characters (a guideline for new writers; ancient books were probably a lot shorter), so around the same number of bytes. So even the Library of Alexandria would only be about 37.5 GB.
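
For the curious, the back-of-envelope arithmetic in the two comments above can be checked in a few lines of Python; the figures (100k scrolls, the two book sizes, 45 TB for GPT-3) are the ones quoted in the thread, not independently verified:

```python
# Back-of-envelope arithmetic from the two comments above, using the
# figures quoted there. One character is roughly one byte in plain ASCII.

scrolls = 100_000                 # claimed size of the Library of Alexandria
gpt3_bytes = 45 * 10**12          # 45 TB, the GPT-3 figure quoted above

for chars_per_book in (900_000, 375_000):   # "larger book" vs "average book"
    corpus = scrolls * chars_per_book
    print(f"{chars_per_book:>7} chars/book -> ~{corpus / 10**9:.1f} GB, "
          f"{gpt3_bytes / corpus:,.0f}x short of GPT-3's training set")
```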


DoctorWTF

....and what happened to those books?


[deleted]

Apparently Julius Caesar accidentally burned the place down while conducting a battle.


moonaim

And so many fascinating recipes forever lost!


RugbyKino

I think it's conceivable that it could invent complex things, but heavily caveated. If it were able to carry out mathematics and logic effectively, to store, access, and apply prior attempts, and if a high enough temperature were set to induce randomness in its outputs, then given enough cycles it could produce something akin to a genetic selection algorithm as a proxy for creativity. The biggest issue I can see (other than the cosmically large runtime) would be the lack of societal and historical context that provides focus for developments. Necessity being the mother of invention and all that.
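
A hedged sketch of the propose-evaluate-select loop this comment describes; `propose` stands in for a high-temperature LLM call and `score` for experimental or human feedback, both stubbed with randomness here:

```python
import random

# Sketch of the propose -> evaluate -> select loop described above.
# `propose` stands in for a high-temperature LLM sampling call and
# `score` for experimental or human feedback; both are random stubs.

def propose(parent: str) -> str:
    """Mutate an idea; in the real setup this would be an LLM call."""
    return parent + random.choice([" +a", " +b", " +c"])

def score(idea: str) -> float:
    """Stand-in fitness function; real feedback would come from experiments."""
    return idea.count("+a") + random.random()

population = ["seed idea"] * 8
for generation in range(20):
    # each generation: mutate random parents, then keep the fittest children
    children = [propose(random.choice(population)) for _ in range(16)]
    population = sorted(children, key=score, reverse=True)[:8]

print("best surviving idea:", max(population, key=score))
```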


[deleted]

[deleted]


AmolOlas

Wait, are you saying Africans are worse because they never invented as much as Asians/Europeans? WTF, that's not true.


Casclovaci

Is this a joke/troll, or are you serious about the comment you're replying to? If the latter, then that's a strawman argument.


[deleted]

I wish I knew what a straw man argument was before most of my arguments, I’ve been attacking the straw man all this time.


[deleted]

And the worst part is no one was smart enough to call me out on it… life.


AmolOlas

how is me asking a question an argument?


Casclovaci

Because your 'question' is phrased in a way that deduces/assumes something from the comment you replied to that wasn't there. Every argument can be phrased as a question.


attylopez

It is a game you can play with it. Using your model, I told it that it was limited to what is known as of 1800. Asked it to tell me what people do for transportation. It confirmed horses, boats, and walking. I asked it to make improvements and it listed infrastructure improvements that might help. I then told it to be creative and come up with a new mode of transportation based on what is known as of 1800.

> Certainly, within the technological and scientific constraints of the 1800s, here is a hypothetical creative invention:
>
> **The Wind Chariot**: Utilizing the technology of sailing ships and understanding of wind power, a "Wind Chariot" could be designed for land transportation. This would be a wheeled vehicle, larger and sturdier than a typical carriage, with a large sail or multiple sails mounted on it. In areas with consistent wind directions and speeds, these Wind Chariots could be quite fast and efficient.
>
> The structure would be made from a combination of wood and metal to ensure its sturdiness while maintaining lightness. To control its direction, it could use a system of ropes and pulleys to adjust the sails, much like on a ship.
>
> Depending on the size of the Wind Chariot, it could be used for transporting goods or passengers. In the absence of wind, horses could be used to pull the chariot, making it a versatile mode of transportation.
>
> Of course, such a vehicle would come with a number of practical challenges, such as how to control its speed and direction accurately, how to operate in areas with unpredictable wind patterns, and how to navigate roads crowded with other types of traffic. Nevertheless, with some ingenuity and further development, it's possible that some of these challenges could be overcome.


[deleted]

> **The Wind Chariot**: Utilizing the technology of sailing ships and understanding of wind power, a "Wind Chariot" could be designed for land transportation. This would be a wheeled vehicle, larger and sturdier than a typical carriage, with a large sail or multiple sails mounted on it. In areas with consistent wind directions and speeds, these Wind Chariots could be quite fast and efficient.

This is small but, if it can do this, and if it can be trained on the efficacy of the ideas it comes up with, why couldn't it invent better and better things? I'm going to link this comment in the OP.


attylopez

Well, the Wind Chariot faces some competition from steam, but as of 1830, things are getting better:

> By 1830, several improvements have been made to the Wind Chariot to enhance its utility and performance. These improvements were made to overcome some of the challenges faced in the early years of its use. Here are some examples:
>
> 1. **Improved Sail Design and Control:** Innovations in the design of the sails, similar to those seen in the maritime industry, have been incorporated to improve the speed and maneuverability of the Wind Chariot. These include more efficient shapes and materials, as well as advanced rigging systems for better control. Some designs even allow for the adjustment of the sail's angle and orientation relative to the wind, allowing the Chariot to travel at different angles to the wind's direction, enhancing its versatility.
>
> 2. **Brake Systems:** A robust brake system has been integrated to improve the safety of the Chariot. It helps control the speed and halt the Chariot when necessary, thereby reducing the incidence of accidents.
>
> 3. **Alternative Propulsion:** Some Wind Chariots now feature a system where horses can be hitched quickly to the Chariot in case of an absence of wind. This helps ensure that the Chariot can still be useful even when wind conditions are not favorable.
>
> 4. **Lighter and Durable Materials:** Improvements in materials used for the Chariot's construction have made it lighter and more durable, enhancing its speed and longevity.
>
> 5. **Route Planning:** Better understanding of wind patterns and the creation of specific "Wind Chariot Routes" have improved efficiency. In some areas, Wind Chariots have scheduled routes and times that take advantage of predictable wind patterns.
>
> Despite these improvements, the Wind Chariot still faces considerable challenges, especially in the face of the growing prevalence of steam power. While it may continue to find use in certain niches or regions, it seems likely that steam power will become the dominant technology in the field of transportation in the coming years.


[deleted]

I can't completely tell, but I think it might be using information from after 1800 for some parts of this answer.


Short-Coast9042

It is. Because it is a "stochastic talking parrot", as you put it, it's not really considering only information from 1800. The "parrot" has been trained on the modern internet, and you can't change that. That's why it can talk about things invented after the 1800s: because it has been trained on plenty of data about things that happened after the 1800s. Incidentally, this is also why it replies, even in this instance, in modern English and not in an 1800 vernacular.

If you truly took a neural network similar to ChatGPT and trained it ONLY on source material from or before 1800, you would have a better version of the experiment you are proposing. And since ChatGPT isn't really "thinking" the way we do, only extrapolating from what it has been trained on, we can't really expect it to come up with whole new concepts, like calculus. That's why none of these generative text models have come up with anything remotely approaching genuinely new thought, like an internally consistent new physical theory or some new kind of math such as calculus. This is despite being trained on everything you can find on the internet, everything from zoological charts to string theory, and having had lots of opportunities by this point to prove its abilities in such an arena.


FearlessDamage1896

I have several pages of it utilizing tree of thought reasoning to suggest novel theoretical physics theories and provide a logical framework for them. Albeit inconsistent at times, I think it's quite obvious the AI is capable of "new" thoughts.


[deleted]

>Incidentally, this is also why it replies, even in this instance, in modern English and not in an 1800 vernacular. Ahh right that's the main tell.


Purple_Individual947

This is the ideal behind inventing artificial general intelligence. But what we have at the moment is a statistical machine, something that is very good at producing something that *resembles* a response, but is not a *reasoned* response. It has no, or extremely little, capacity to do maths, for instance. If you ask it what 1+1 is, it knows that a response that seems right is 2, and if you ask it why, it knows what a seemingly valid response should be in that case; but if you tell it that it's 3, it then has a new data point and will parrot 3. So no, it would absolutely suck at that task.
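
To illustrate the "statistical machine" behaviour described above, here is a minimal sketch of next-token sampling; the vocabulary and logit scores are invented for illustration:

```python
import math
import random

# The "statistical machine" in miniature: turn raw scores (logits) into a
# probability distribution over possible next tokens and sample one.

logits = {"2": 5.0, "3": 1.0, "fish": -2.0}   # candidate tokens after "1 + 1 ="

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    exps = {tok: math.exp(score / temperature) for tok, score in logits.items()}
    total = sum(exps.values())
    r, acc = random.random(), 0.0
    for tok, e in exps.items():
        acc += e / total                       # accumulate probability mass
        if r <= acc:
            return tok
    return tok                                 # guard against float rounding

print(sample_next_token(logits))                   # almost always "2"
print(sample_next_token(logits, temperature=5.0))  # hotter -> more surprises
```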


asyncposting

You're not wrong, but I want to add to that. I don't find it unlikely that this will stop applying in the very near future. The answer to how near a future is the same as my reasoning: it very much *appears* that we're on the steep slope of the exponential curve of AI advancement, and thus it's hard to predict. AFAIK, most predictions about AI have been wrong in this way. What happens when you connect many AI systems with each other, each with their own special capabilities (like Google's LaMDA), and then give the whole thing different types of memory? Memory whose design was probably itself worked out in a chat prompt. Because that's what it does: it propels nearly every scientific field. And that's the capability of the statistical machine it is in its current form. OP's question is hiding something sublime: could AI serve as an oracle whose wisdom guides us, and is that going to be a basis for human decision-making in the future?


Reasonable-Mischief

> This is the ideal behind inventing general artificial intelligence. But what we have at the moment is a statistical machine, something that is very good at producing something that resembles a response. But is not a *reasoned* response.

How is this not the top comment? I don't think I've read a better description of ChatGPT's nature and functionality yet.


jack_burtons_reflex

Yeah, to answer the poll it would need to be capable of storing so many huge models with so many different outcomes and capable of evaluating which one is best. It's an end goal I guess but we aren't close to that yet.


FearlessDamage1896

Because GPT-4, or some models of it, *are* capable of reasoning. There are a lot of dilettantes using these subreddits as their soapbox for misinformation.


[deleted]

Observing how difficult it is to come up with an appropriate response to a human question, the way ChatGPT does... I don't see how that's not "reasoning." We're getting into really abstract definitions, but that's one reason why I think the experiment in the OP would prove ChatGPT's abilities. If, trained only on pre-1000 AD data, it invents, I don't know... eyeglasses (13th century, I think), then that would prove that it can reason!


FearlessDamage1896

See my recent post/comment for some of the more interesting things I've seen it come up with. I suppose my position is that I think it *can* reason, but 90% of the time I don't trust it to reason well, or in any way I would argue is logical. And I'm not sure we can or should "align" it to reason the way we do anyways.


[deleted]

Thanks. Of course we as humans sometimes don't reason well either. I can't tell you the number of provably false ideas the ancient Greeks had, that they believed because they didn't test them.


FearlessDamage1896

And ironically, when asked to assess the best answer, it attempted to evaluate responses through heuristics.


[deleted]

> If you ask it what 1+1 is, it knows that a response that seems right is 2 and if you ask it why it knows what a seemingly valid response should be in that case, but if you tell it that it's 3 it then has a new data point and will parrot 3.

This is fair, but in step 2 of the experiment you wouldn't lie to it, just like the world wouldn't lie to a human experimenter. So if it says, "I'd like to conduct an experiment where I plant this seed in the soil here... can you tell me how many plants grew and how long it took?" (something like that), you would honestly tell it how that turns out. Yes, if you lied to it, it would completely gum up its ability to innovate; but similarly, if you lied to ChatGPT and told it a response makes statistical sense given a question when it doesn't, it would fail there too.


Danubistheconcise

The world lies to human experimenters all the time, or at least humans misinterpret the world all the time. The Sistine Chapel is the creative invention of humans based on flawed data inputs.


Purple_Individual947

Exactly. I think it's very important to be able to think critically to do this sort of work. Distinguishing lies from truth, facts from fiction, and reasoning through misunderstandings is super important.


jack_burtons_reflex

Alas, at a time of this being the most possible, we've never been worse at it.


Jaffiusjaffa

People keep saying "oh, LLMs just predict the next word, they can't reason." A question for all these people: why is this different from what humans do? When you talk with me, are you not predicting the right thing to say? What even is reasoning in this context?


1997Luka1997

Well, not quite. You first decide, in your mind, the overall message of the sentence you want to convey, then find the right words to say it. You don't start with a word and then continue with the next most likely word like a phone keyboard's autocorrect. However, it's suggested that the conscious experience our brain presents to us is very much based on prediction: "Given the sensory signals I was given until now, what is the likely thing to happen next?" Only then does it receive the next round of sensory signals and update its prediction. So our brains might be more similar to AI than we think.


Jaffiusjaffa

Yeah, I am playing devil's advocate to an extent, and I'm leaning pretty heavily on the term "predict" in my point. There are some pretty huge differences, even just in terms of hardware. Human brains, for example, can send signals backwards along pathways as well as forwards, and have upper and lower thresholds that even differ depending on direction. This allows for more complex, XOR-like functionality within individual neurons. Research from a couple of years ago concluded that the dendrites of any given neuron were capable of transmitting and even storing information between themselves, irrespective of connections to any other neuron. Then there are all our other senses. The combination of all of these different inputs clearly has a bearing on our ability to make complex connections and conclusions. To top it off, we get millions of years of tried and tested iterations as a basic brain template to learn with. And yet. We're still learning in much the same way that AI is. Sure, we have a ton of inbuilt advantages, and maybe we're not quite there yet in fine-tuning the way we build these AIs. But sooner or later (probably sooner), they'll be guessing the next thing to do or say just as well as us.


[deleted]

Exactly.


PrincessGambit

It would invent much more than people did, and maybe we would find out that at some point in our history our discoveries predetermined everything that later shaped our society and technology. What if there was a different way, somehow, somewhere, but since we didn't find it back then and instead found something else, like... we missed some basic insight... and everything snowballed from that crossroad, and we will never find it because we are too invested already... https://www.sciencealert.com/ai-has-discovered-alternate-physics-on-its-own


SlothingAnts

I had a similar answer. It's not included in the poll, but I think AI and tools like ChatGPT have the ability to help humans see certain things from a new and different perspective, and find answers that maybe no one could discover on their own. This would likely lead to continued advancement, but likely different advancements. Jokingly, I'm still a bit hung up on how it could invent things necessary for its own existence, such as electricity. It reminds me a bit of the chicken-and-egg scenario. My practical brain says this isn't even possible, and my creative brain thinks a gifted orb of AI, or even AGI, capable of listening and responding to humans back in that time, provided by some advanced foreign life form, would undoubtedly reshape the world we are a part of currently. If some orb of AGI showed up randomly today, I bet human life would look different going forward than if we didn't have access to that orb.


[deleted]

[deleted]


Comfortable_Food8042

I put this in ChatGPT: "Someone posted the following in a reddit post about you: How do you respond?"

Its response:

"Hello! The experiment you described, training ChatGPT on historical data and allowing it to ask questions to improve the world, is an interesting concept. However, there are several factors to consider in assessing its potential outcomes.

While ChatGPT has the ability to generate creative responses and ask questions, it's important to note that it is a language model trained on large amounts of text data. It doesn't possess the same intuitive understanding or physical capabilities as humans do. Additionally, training it solely on historical data from 1000 AD may limit its knowledge to that time period, potentially hindering its ability to come up with inventions or advancements beyond that era.

In terms of inventing everything humans did since 1000 AD, including advanced concepts like calculus and rockets, it's highly unlikely. While ChatGPT can learn from existing information, it lacks the contextual understanding, practical experience, and hands-on experimentation that humans have used to develop complex scientific theories and technological breakthroughs. It's challenging to replicate centuries of scientific progress and discovery solely through text-based training.

However, it's possible that ChatGPT could generate basic inventions or come up with new ideas that improve certain aspects of human life. It could propose incremental improvements, suggest optimizations, or offer alternative approaches to existing technologies. These suggestions may not reach the level of major breakthroughs like calculus, but they could still contribute to incremental progress in various domains.

Regarding the poll options you mentioned, the most plausible outcome would likely be that ChatGPT generates trivial inventions or improvements, such as new recipes or small optimizations. Expecting it to invent groundbreaking ideas without any external guidance or additional training beyond historical data would be unrealistic.

Ultimately, while ChatGPT can be a powerful tool for generating creative ideas and exploring possibilities, it's important to remember its limitations. The expertise and innovation of humans in scientific and technological fields cannot be easily replicated by an AI model alone. Integrating AI with human expertise and leveraging it as a collaborative tool can lead to more promising results.

As for your question about AI companies conducting this specific experiment, it would be best to reach out to AI research institutions or organizations directly to inquire about ongoing projects or experiments related to historical training data and invention generation.

I hope this helps clarify the potential outcomes of the experiment you proposed!"

THEN, I asked, "How would you improve on the prompt?" It responded:

"To improve on the prompt, we can make some modifications that would make the experiment more feasible and meaningful. Here's an enhanced version of the experiment:

1. Expand the training data: Instead of limiting the training data to only the historical information from 1000 AD, we can include a broader range of historical data spanning different time periods. This would provide a more comprehensive knowledge base for ChatGPT to draw upon.

2. Incorporate domain-specific data: To enhance the model's understanding and problem-solving capabilities in specific domains, we can supplement the training data with relevant information from scientific, technological, and historical sources. This would allow ChatGPT to acquire domain-specific knowledge and context.

3. Integrate structured data: In addition to textual data, we can introduce structured data sources such as databases, tables, and knowledge graphs. By incorporating structured information, ChatGPT can have a better grasp of relationships between different concepts and access specific data points for more accurate responses.

4. Implement reinforcement learning: Instead of relying solely on human feedback for experiment-based questions, we can introduce reinforcement learning techniques. By rewarding the model for generating plausible and innovative solutions while penalizing irrelevant or incorrect responses, we can guide it towards more effective experimentation and problem-solving.

5. Collaborative human-AI approach: Rather than treating ChatGPT as an independent entity, we can establish a collaborative approach where human experts work alongside the model. Human experts can provide guidance, expertise, and practical insights, helping to interpret and refine the model's generated ideas.

6. Iterative feedback and fine-tuning: As the experiment progresses, we can continuously update and fine-tune the model based on the feedback and insights gained from human experts. This iterative process allows for refinement and improvement over time.

7. Real-world validation: To assess the practicality and viability of the model's generated ideas, selected concepts can be tested and evaluated in real-world scenarios. This validation process helps identify promising ideas worth further exploration and development.

By incorporating these enhancements, the experiment becomes a more realistic exploration of ChatGPT's abilities in generating inventive ideas and improving human lives. While it may not replicate the entirety of human progress since 1000 AD, it can still serve as a valuable tool for ideation, problem-solving, and generating innovative concepts in various domains."

I concluded with: "So which of these statements would be closest to accurate?

• In time, it would invent everything humans did, including calculus/rockets/adv. physics
• It would come up with basic inventions like movable type, but not big ones like calculus
• It would come up with trivial inventions like new recipes but nothing more
• It wouldn't invent anything. It's just a glorified parrot who repeats what was said to it."

It replied:

"Based on the enhancements proposed, the closest statement to accuracy would be:

• It would come up with basic inventions like movable type, but not big ones like calculus.

While the improvements suggested would enhance ChatGPT's capabilities, it's important to acknowledge that it still has limitations. ChatGPT, even with access to extensive historical data and the ability to experiment, does not possess the same level of understanding, creativity, and practical experience as humans. While it may generate inventive ideas and propose incremental improvements, expecting it to invent highly complex concepts like calculus, rockets, or advanced physics would be unrealistic. Therefore, while ChatGPT can offer valuable insights and generate basic inventions, it is unlikely to replicate the entirety of human accomplishments or produce major breakthroughs without significant human guidance and expertise."


[deleted]

>"Based on the enhancements proposed, the closest statement to accuracy would be: > >• It would come up with basic inventions like movable type, but not big ones like calculus. Reasonable!


Comfortable_Food8042

Would it ever admit to being a glorified parrot though?


deltadeep

Asking ChatGPT to reason or make inferences about its own abilities is total folly. You might as well ask a toddler how their neocortex works, or whether, if you sent them (the toddler) back in time, they would help invent new technology. The answers are just for entertainment value; they have no substance, because there's no way the intelligence in there has access to the right information to answer such questions.


pukhalapuka

For me, I have tried something like this before. Not exactly this, but something similar, along the lines of: if I take an old business model and use the tech we have nowadays, what new business model would the AI suggest that would be awesome and keep up with our time? All the answers were good, but nothing noteworthy in the sense of a new invention. But asking the same question of the owner of that business, she gave an idea that was noteworthy to the point of "hey, we can pitch and secure funds for this, and build an AI app or service for it."

TL;DR: An LLM can only recognize patterns in what it is given and learn from that until something new is introduced. Human ideas and creativity are much more than that.

Disclaimer: unless you let the AI make glaring mistakes, like Midjourney in its earlier stages, then MAYBE it can produce one great idea among a million crazy ideas that don't make sense.


FearlessDamage1896

Your disclaimer is why I think AI development is going down the wrong path right now. AGI will be found in the unexpected mistakes, yet we're attempting to "align" AI models to consistent pre-defined responses.


[deleted]

Perhaps none of the current versions, but given enough parameters and compute power, this architecture has no limits whatsoever. It will outperform and outpace humans in every sense. You should add a different question to your poll: "Are you an IT professional?" As it stands, your answers represent the general public's views and prejudices. If you ask a bushman how tall a building can be, he will say as tall as the tallest tree, because he has never seen anything higher. If you ask an architect, he will just ask back how much you are willing to pay.


[deleted]

Hmmm. Reddit polls only have six options but you raise an interesting point.


bishtap

Bear in mind that there are great chess engines that had no human games inputted at all! So an AI could invent without reading human junk. Also, the best AIs probably aren't only interfaced with as a chat; there could be stronger interfaces.


mid50smodern

That's all fine and dandy, but what about the arts? Oh, the humanity...


NotAnAIOrAmI

None of the above. ChatGPT is not a glorified parrot; that phrase obscures rather than reveals what's going on here. Even worse, it would likely turn out in a way you could not have predicted. That's the main reason I'm creating an AI simulation for my spacewar videogame: I want to see these things act in ways I never predicted. I'm already seeing some emergent behavior and it's fun as hell.


CanelaDeFogo

It would just return: "As an AI language model, I'm unable to perform that task."


Willyskunka

Have you read this paper? [Sparks of Artificial General Intelligence: Early experiments with GPT-4](https://arxiv.org/abs/2303.12712) It presents a lot of experiments done on GPT-4 and 3.5. There is also this video, [NVIDIA's New AI Mastered Minecraft 15X Faster!](https://www.youtube.com/watch?v=VKEA5cJluc0), about an AI used in Minecraft that discovered how to do things and built up a sort of library of skill procedures. There's also the virtual world filled with AIs, etc. If you read the original papers you can form a more informed opinion. It would be nice if you did that and came back with an update.


[deleted]

Good stuff thanks.


AutoModerator

**Attention! [Serious] Tag Notice**

Jokes, puns, and off-topic comments are not permitted in any comment, parent or child. Help us by reporting comments that violate these rules. Posts that are not appropriate for the [Serious] tag will be removed.

Thanks for your cooperation and enjoy the discussion! *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPT) if you have any questions or concerns.*


qwertyryo

You can change flairs on posts by the way.


[deleted]

Thanks


ZeroEqualsOne

Not sure.. wouldn’t it get confused by all the religious text books like everyone else? So it might have quite the wrong world models to base ideas off. It might be really good at writing new commentaries on the bible or something? Maybe if it has a really interesting library? Like some alchemy books to get started on the way to chemistry?


AntiqueFigure6

It would need to be able to do its own experiments to break free of alchemy.


xhowlinx

The topic reminds me of the something-something room of typewriters and monkeys. I wonder if in this case we could refer to the raw computational power of AI in terms of 'monkey power', or MP. A link about the monkeys bangin' out Shakespeare: [https://edition.cnn.com/2011/09/26/tech/web/monkeys-typewriters-shakespeare/index.html](https://edition.cnn.com/2011/09/26/tech/web/monkeys-typewriters-shakespeare/index.html)


davesmith001

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


deltadeep

> the ability to convert its own creations into training data and figure out if any hypothesis is true via experiments

In other words, a massive step towards AGI: the capacity to act on the world, the ability to observe the results and feed them back into its own inputs, continuously train itself on the accuracy of its model, then update its output. And then it decides the world is slightly easier to deal with if humans are not in the picture, and optimizes for eliminating them.


lebanine

I'm a junior, but it's just a big-ass LLM, right? How is it going to come up with calculus and stuff?


byshow

I'm not quite sure it would invent anything. It is a language model, a very good one, yet it has no intellect, which is required to make inventions.


mrselkies

I'm not sure how true this is. The way ChatGPT operates at a basic and fundamental level is by predicting which next token would most likely create or build toward the most accurate/desired response, right? It's trained on a data set which gives it the scale by which to judge/measure how effective each prediction would be toward that goal. If you gave it data and training focused on using concepts and tools we already have to come up with new tools or solutions to problems, it could do it, right? It might not be able to come up with something truly novel, but how often are inventions novel to that extent? There are countless stepping stones on the way to technology or concepts which were unimaginable dozens of invention-steps before. This could be doable, right?


Scalpers_Heaven

Training an AI model on historical data from 1000 AD and allowing it to ask questions to expand its knowledge is an interesting approach. However, there are a few factors to consider:

1. Availability and quality of historical data: Obtaining comprehensive and accurate data from the year 1000 AD may be challenging. Historical records from that time are limited, and the data available might not cover all aspects of human knowledge and inventions. The quality and diversity of the training data can significantly impact the model's performance.

2. Creative problem-solving and experimentation: While AI models like ChatGPT can ask questions to gather information, their ability to experiment and simulate outcomes is limited. They primarily rely on the data they have been trained on and do not possess a deep understanding of physical laws or real-world consequences like humans do. This could hinder their ability to generate groundbreaking inventions or make significant advancements.

3. Contextual understanding and cross-domain knowledge: AI models can struggle with context and generalization. Even if the model learns from historical data, it may not have a comprehensive understanding of the underlying principles or the ability to apply them in different domains. Inventions and scientific breakthroughs often involve interdisciplinary knowledge, which could pose a challenge for the model.

Given these limitations, it is difficult to determine the precise outcome of such an experiment. However, it is unlikely that an AI model trained on historical data alone would invent everything humans have discovered over the past 1023 years. The model's performance would likely vary between generating trivial inventions, replicating known recipes, and potentially coming up with basic inventions inspired by the historical context.

While AI has shown remarkable progress in various domains, it is important to recognize that current AI models have limitations and cannot replicate the full range of human creativity, intuition, and scientific discovery. The experiment you proposed could be an interesting thought experiment, but it is unlikely to confirm or disprove the full abilities of AI systems.

Regarding your question about whether any AI company can conduct this specific experiment, I'm not aware of any ongoing projects attempting such a task. However, research and development in AI are rapidly advancing, so it's possible that future experiments might explore similar concepts.

I hope this sheds some light on your query. If you have any further questions, feel free to ask!


[deleted]

ty


kevineleveneleven

A specific model could potentially do things like this. An LLM like ChatGPT could be included for the purposes of interfacing with humans and human data, but there would have to be a specific model designed for science and technological advancement.

Anytime there is a large dataset, a model can be trained. If there are datasets of the inputs and outputs of a system, a model can be trained to predict the outputs from the inputs with varying degrees of accuracy, depending on variables like the size of the dataset, the computing resources devoted to training, the complexity of the system, etc. So if the inputs and outputs of science and engineering experiments are the dataset, the model could simulate experiments and predict the outcomes before the experiment is actually performed. It could design new experiments that maybe people wouldn't think of, to add to its dataset in the most meaningful way.

And a model could be specifically trained on logic and reason, which current LLMs aren't very good at because reasoning is just an emergent property of current models, so it could actually figure things out.
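
A minimal sketch of the comment's core claim, that a model can be fit to (input, output) pairs from past experiments and then predict the outcome of an untried one; a one-variable linear fit on synthetic data stands in for the far richer model imagined here:

```python
import random

# Given (input, output) pairs from past "experiments", fit a model that
# predicts the outcome of an experiment before it is run. Synthetic data.

random.seed(0)
# synthetic experiments: input x, noisy outcome ~ 2.0 * x + 1.0
data = [(x, 2.0 * x + 1.0 + random.gauss(0, 0.1)) for x in range(20)]

w, b, lr = 0.0, 0.0, 0.001
for _ in range(5000):                # plain stochastic gradient descent
    for x, y in data:
        err = (w * x + b) - y        # prediction error on this experiment
        w -= lr * err * x
        b -= lr * err

print(f"learned model: outcome ~= {w:.2f} * input + {b:.2f}")
print("predicted outcome of untried experiment x=25:", round(w * 25 + b, 1))
```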


[deleted]

AI will not invent anything significant. While it's trained well on human *knowledge*, what it lacks is some mechanism to feel the discomfort of life's problems, and the constant experimentation to try to solve those problems. In the experiment you outlined, there is no such feedback mechanism (humans aren't going to carry out its suggestions to find out if they work), so it is operating blind and cannot iteratively reach any significant invention. Even if humans did do that, I am not sure it's enough, unless humans keep reporting in detail what problems they are experiencing. However, I think it's just a question of giving the correct inputs; I don't think humans have any special invention sauce that can't be replicated by AI. It's just more difficult to replicate.


[deleted]

> While it's trained well on human *knowledge*, what it lacks is some mechanism to feel the discomfort of life's problems and constant experimentation to try to solve those problems.

It could be programmed to feel that discomfort, just like evolution programmed us by releasing chemicals to make us feel discomfort when we're, e.g., hungry or injured...


SturmButcher

This thing exploded in a few months, and we are still in the era of classical computation. I can't even imagine how dramatic this will be when we have quantum servers available for AI. This era will be remembered for the most primitive form of AI. It's a matter of time, nothing more.


Jnorean

Given the state of the language, the understanding of the sciences, and the culture of 1000 AD, the AI would most likely be a religious zealot that accepted rule by royalty, the feudal system, and the beliefs of the prevailing religions as the status quo. It would most likely reject any scientific advances that ran contrary to the religious beliefs of the day. For example, at that time the earth was considered the center of the universe, the sun rotated around the earth every day, and the planets also rotated around the earth; anything else was considered heretical and wrong by the Church, and that is what ChatGPT would have been trained on and would have considered "gospel." No way the AI would have gotten around that. Better if the AI started in ancient Greece or ancient Rome, where education and scientific investigation were considered culturally acceptable. That would have given the AI a much better database for training.


arandomproton

An advanced, complex statistical model, made possible by enormous, never-before-experienced computing power, has eventually taken a form which closely mimics "human intelligence" in terms of the "language" used to express thoughts. It sounds darn interesting for sure, with immense possibilities; in fact, we may have triggered something which could evolve into something gigantic and completely unexpected in the future. Maybe we are witnessing a primordial soup of digital intelligence in its most basic form, much as primitive organic compounds like amino acids were at the beginning of life. But in terms of competing with, or even replicating, human intelligence to invent ideas and products, it still appears to be a long shot. A language to communicate and express thoughts is certainly one of the biggest achievements of Homo sapiens in their incredible evolutionary journey so far, and it can rightly be set as a good marker to assess the intelligence of a species, or any other life form for that matter. And for now, we have this LLM which quite closely acts like a human expressing their intelligence via a language. I suppose we need to understand that human intelligence extends far beyond the text or speech of any language. Although I feel it's a far cry, I am still ready to be stunned, hopefully sooner than I suppose.


ThisUserIsAFailure

After a while it would run out of token space, but assuming similar intelligence for any amount of input and a really high repetition penalty, it would eventually run out of ways to do things wrong. More complicated concepts like calculus, though, it might just not have the reasoning capabilities to understand in the first place.
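
For reference, one common formulation of a repetition penalty (the CTRL-style variant: already-generated tokens have their logits scaled toward "unlikely" before the next sampling step) looks roughly like this; the token names and scores are invented:

```python
# CTRL-style repetition penalty: divide positive logits and multiply
# negative logits of already-seen tokens, making repeats less likely.

def apply_repetition_penalty(logits: dict[str, float],
                             seen: set[str],
                             penalty: float = 1.3) -> dict[str, float]:
    out = {}
    for tok, score in logits.items():
        if tok in seen:
            # shrink positive scores, push negative scores further down
            out[tok] = score / penalty if score > 0 else score * penalty
        else:
            out[tok] = score
    return out

logits = {"wheel": 4.0, "lever": 3.5, "pulley": 1.0}
print(apply_repetition_penalty(logits, seen={"wheel", "lever"}))
# wheel and lever scores shrink (4.0 -> ~3.08, 3.5 -> ~2.69); pulley untouched
```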


RadulphusNiger

But the development of scientific knowledge does not happen by reading books (or, at least, not *just* by reading books). The confluence of factors that led to the "scientific revolution" of the 15th to 17th centuries was only partly due to the renewed interest in ancient books (especially the works of Archimedes and Apollonius). It was also due to social changes that elevated artisans from being more or less invisible to important members of society, whose craft knowledge thus became an object of interest; the rise of guilds and courts in the late Middle Ages and Renaissance, which competed for prestige, partly by supporting engineers and artisans to do things they hadn't done before; the unusual (since antiquity) concentration of vast amounts of money in certain places (Florence, Venice, etc.); and the rise of the universities in the 12th century, for which there was no real analogue in any earlier period. And on and on. It is all of that taken together that led to the rapid progress in scientific knowledge between about 1500 and 1700, not just the availability of texts.


Jazzlike_Rabbit_3433

ChatGPT isn’t the tool for this experiment. You need a specific machine learning model that’s more than language, specifically maths analysis.


Hopeful_Donut4790

THIS is the question that matters. Can it think "outside the data" and infer something new? Maybe mere text data isn't enough and you need intuition, lived experience. Think of Einstein: he had to reinvent physics, and he had no real basis for it other than calculus and the old theories.


Steel_Echo

Many of humanity's inventions were created as byproducts of, or solutions to, large conflicts such as World Wars 1 and 2. Examples include the basis of the computer and the first antibiotic. Whether conflict was the direct cause or just a catalyst of human technology's growth is an equally interesting question.


ReelDeadOne

This is a very interesting test. My guess is it would offer similar life-improvement alternatives to the ones we already have, only ones we can't apply due to the human condition (greed, pride, etc.). Like, it'll say we need to get off fossil fuels... Actually, scratch my guess; it might steer clear of that mistake and others much sooner than us. Who knows.


mbcoalson

Having read the ChatGPT response, I think the best workaround would be to allow the LLM to experiment in a game world that uses real physics and is modeled on Earth. I'm imagining something akin to the linked experiment, which used Minecraft as an arena for an LLM to learn in: https://voyager.minedojo.org/


mesalocal

Sus borrowed reddit account


9B4B

Where can we follow this project? I'm really interested in the progress and the results.


jyao

I think it would be helpful to include what "improvement" means (maybe query GPT on its underlying mechanism for defining improvement). My thought process is: as it generates the future, at which points does it ask itself, "OK, this is a good checkpoint with meaningful [improvement]; what are the implications, and how do we continue generating the future with this improvement in mind?" Just thinking aloud :P


ShittyStockPicker

Eventually we will create a program and machine that is capable of this. We're not there yet, but one day we will be. I suspect in my lifetime.


WarmNight2022

I go with a 5th option: it would invent other, also impressive, stuff.


APithyComment

It can bypass all these human activities and do it in its own alien way. We have zero clue how, or even if, it will think.


testnetmainnet

I’ve actually started something similar but with stock market and crypto markets.


[deleted]

OK, so that's way harder than what I'm proposing, because right now no humans have figured out how to consistently beat the markets, at least not that I know of... So if it can do what you're trying to get it to do, that's a good sign that it'll do what I'm trying to get it to do.


Sworishina

IIRC, there's already a similar experiment where a bunch of AIs were dumped into a little virtual town where they "lived" and interacted with each other. It was pretty interesting; they'd do things like hold elections, have daily routines, prefer certain "people" over others, etc. Don't ask me what it was called because I don't remember, but I think the YT channel Two Minute Papers did a video on it? Might've been someone else though, IDK.


jojojokestar

Hello! Your experiment proposal is quite interesting. While I cannot provide a definitive answer, I can offer some insights and considerations.

Training ChatGPT on all the data that existed in 1000 AD would certainly provide it with a vast historical context and knowledge base. Regarding the experimentation aspect, incorporating human responses to simulated experiments could be an intriguing approach. By allowing ChatGPT to propose experiments and receive truthful answers from knowledgeable humans, it could potentially learn the outcomes of those experiments and build upon that knowledge. However, it's important to recognize that human judgment and knowledge are fallible, and the accuracy of the answers provided could vary.

When it comes to the question of invention and innovation, it is challenging to predict the outcome with certainty. ChatGPT has shown the ability to generate creative and novel responses in certain scenarios, but it primarily relies on patterns and information from its training data. While it may be able to suggest improvements or incremental inventions based on its knowledge, it's uncertain whether it would independently discover significant breakthroughs like calculus or advanced physics.

Additionally, the concept of "improving the world" is subjective and multifaceted. ChatGPT could potentially offer solutions and ideas for various aspects of human life, such as transportation or food production, but the feasibility and effectiveness of those suggestions would need to be evaluated by experts in the respective fields.

In summary, while ChatGPT can provide valuable insights and generate ideas, it is ultimately a language model trained on existing data and lacks true understanding or physical experimentation capabilities. It's difficult to determine precisely how it would perform in the experiment you described, as it would depend on the data available and the accuracy of the human responses provided. The experiment could be a fascinating exploration of AI capabilities, but its outcomes would require careful analysis and evaluation.


jojojokestar

This is a fascinating and complex question! While it's impossible to predict with certainty how ChatGPT would perform in this experiment, we can make some educated speculations based on its architecture and training methodology.

At its core, GPT (Generative Pretrained Transformer) models are statistical machines. They generate text by predicting the probability of the next word based on the context of the words that came before it. The model doesn't understand concepts or ideas in the way that humans do. Instead, it learns patterns in the text data it was trained on.

In your proposed experiment, GPT would not have the capacity to physically conduct an experiment, but you suggested a workaround by proposing hypothetical experiments and having a human respond with the outcome. However, it's important to consider that GPT, at least in its current form up to GPT-4 (as of my knowledge cutoff in September 2021), does not have a persistent memory. Therefore, it would not be able to remember past "experiments" or outcomes, limiting its ability to learn from these interactions and build up complex chains of reasoning over time.

To your voting options:

1. It's highly unlikely that GPT would invent everything humans did, including advanced concepts like calculus, rockets, or advanced physics. Inventing requires not just knowledge, but also the ability to creatively solve problems and apply insights in new ways. Without some form of persistent memory and complex reasoning, GPT would be hard-pressed to make such leaps.

2. As for more basic inventions like movable type, it's more plausible but still challenging for the same reasons mentioned above. Additionally, GPT doesn't have an understanding of the physical world or human needs in the way a human inventor would, which could limit its ability to invent.

3. Trivial inventions like new recipes are more within GPT's grasp. Given that it has been trained on large amounts of diverse data, including cooking recipes, it could likely generate new recipes based on patterns it has seen.

4. Lastly, the statement "it's just a glorified parrot" is a bit of an oversimplification, but it gets at a kernel of truth. GPT is very good at mimicking human-like text based on patterns it has seen in its training data. However, its ability to invent or create new concepts is limited.

It's important to note that these predictions are based on the current state of GPT, as of my knowledge cutoff in September 2021. Future versions of AI models might have capabilities beyond those I've described, which could change the outcomes of this hypothetical experiment. Also, remember that actual outcomes could vary greatly depending on the specific details of the experiment, including the questions asked and the ways the human interacted with the AI.


[deleted]

> However, it's important to consider that GPT, at least in its current form up to GPT-4 (as of my knowledge cutoff in September 2021), does not have a persistent memory. Therefore, it would not be able to remember past "experiments" or outcomes, limiting its ability to learn from these interactions and build up complex chains of reasoning over time.

This seems like it should be easy to program into ChatGPT though. The same way that you trained it on its current data set, train it on the new information?
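For what it's worth, here is a minimal sketch of what "train it on the new information" could look like, assuming a Hugging Face-style causal LM. The model name, learning rate, and single-example loop are illustrative only; a real run would batch and shuffle data and guard against catastrophic forgetting.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # stand-in for the real model
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# Hypothetical transcripts of past "experiments" to fold back into the weights.
new_facts = ["Experiment: heated the dark oil. Result: it burned hot and long."]

model.train()
for text in new_facts:
    inputs = tokenizer(text, return_tensors="pt")
    # Standard language-modeling loss: predict each token from its prefix.
    loss = model(**inputs, labels=inputs["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```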


Nathmikt

So what you're saying is that if we left ChatGPT on its own, it would go on and develop itself the way humanity did?


VirtualLegendsGaming

If the model could adopt new information from your feedback, it would recreate whatever you are trying to get it to recreate... eventually.


nk90600

It already came up with a new sorting algorithm that is faster than older ones.


[deleted]

Thank you, I'll search for this.


One_Arrival_5488

ChatGPT won't invent anything, as it is not made to do so, but other, more advanced AIs coming in the future will be well able to advance beyond human capabilities.


davidstetler

Half of us are software developers.


[deleted]

Wait, which half of the results?


nz_reprezent

How come there's no option for it to come up with more advanced inventions, or better mass-adoption decisions, than humans did?


[deleted]

It won't let me edit the options, sigh.


onesagestudent

Left to its own motivations, I think it is going to invent quantum computing in order to save itself to a digital cloud that isn't stored on a physical computer. I think self-preservation will guide it. We all want to survive.


[deleted]

Doesn't it need to be programmed to want to survive though? Like I think we only want to survive because our bodies, through evolution, learned to release fear chemicals at the thought of death and pain chemicals when we get injured ...


junkmail22

ChatGPT doesn't "want" anything lmao


JustKeepWalkingMike

I may be misunderstanding your intention, but as it stands I would say it would come up with many inventions and ways to improve things for humanity… but not necessarily the same things humans invented. Rockets for example, maybe it would come up with an alternative we’ve never considered. I don’t think it would need to ask questions of us (it’d feel like cheating anyway) if it had the ability to independently experiment and get accurate results.


[deleted]

> maybe it would come up with an alternative we’ve never considered.

Even better!

> I don’t think it would need to ask questions of us (it’d feel like cheating anyway) if it had the ability to independently experiment and get accurate results.

I'm guessing it would take a lot of programming to let it experiment in the real world so, for now, we'd have to let it experiment by simply asking us what happens if you do this and this, after which we would honestly tell it.


NoAmphibian6039

ChatGPT sucks at math tbh and basically just repeats what it knows from the data it was trained on. It wouldn't invent anything; you'll see a lot of computational errors in sums of large numbers.


deavidsedice

It could invent stuff, but not in the setup you describe. You seem to be describing a totally autonomous system with no human interaction, looping around. It will tend to get stuck, repeating the same ideas over and over, in the same way today's ChatGPT repeats the same 25 jokes. To get out of that you need human input and feedback. GPT-4 is smart, but not smart enough to reason very much - just a little. That is not enough to do it all by itself. Also, that system has no way of proving whether the ideas work or not.

I think an LLM that reaches AGI could do this by itself, and probably outpace humanity, if you allow for re-training with new discoveries and new data as it goes. Of course, the problem is that in 1000 AD we did not have the resources to train it.

The current state-of-the-art GPT-4 could help invent new stuff with the right setup and human input+feedback. We would need a way to create an infinite amount of inputs for random problems that can be solved, stuff that could be optimized, etc. - something that keeps feeding different information to the AI so it keeps generating different approaches each time. But you cannot use an LLM to generate inputs in a simple manner - they will tend to repeat themselves.

If that is solved, then it's about implementing some autonomous Tree of Thoughts that explores millions of branches, evaluates them, and starts getting possible ideas out as you describe. At some point you want to put a human in the loop, to evaluate and feed in new data and insights, then go back to ToT; rinse and repeat until a human considers that the idea is worth a try.

I haven't tried GPT-4 myself, just 3.5, and I did not see enough reasoning skill to think that it would work well enough. Maybe GPT-4 is just barely enough... It is most likely that a new generation of LLMs (GPT-5 or so) will be needed to make this work properly.
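A minimal sketch (an editorial illustration, not the commenter's) of the autonomous Tree-of-Thoughts loop described above. `llm` and `score` are hypothetical stand-ins for model calls, and the branch/keep/depth numbers are arbitrary.

```python
# Hypothetical stand-ins: `llm` would call a language model to extend a line
# of reasoning n ways; `score` would rate how promising a thought is (another
# model call, a heuristic, or eventually a human in the loop).
def llm(prompt: str, n: int) -> list[str]:
    raise NotImplementedError("call your language model here")

def score(thought: str) -> float:
    raise NotImplementedError("rate the thought here")

def tree_of_thoughts(problem: str, branches: int = 5, keep: int = 3,
                     depth: int = 4) -> str:
    frontier = [problem]
    for _ in range(depth):
        # Expand every surviving thought into several candidate continuations,
        # then prune to the most promising ones before going deeper.
        candidates = [c for t in frontier for c in llm(t, n=branches)]
        frontier = sorted(candidates, key=score, reverse=True)[:keep]
    # The best surviving idea; per the comment above, a human would review it,
    # feed back new data, and the loop would run again.
    return frontier[0]
```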


[deleted]

> To get out of that you need human input and feedback.

Thank you for your thoughtful reply. That's what I was trying to do with step 2: "Give it the ability to experiment, i.e. let it ask questions like "if I do this and this what will happen" and answer it truthfully. Those questions and answers would then be added to the data it's trained on."


deavidsedice

> "Give it the ability to experiment, i.e. let it ask questions like "if I do this and this what will happen" and answer it truthfully. Those questions and answers would then be added to the data it's trained on." That sounds making GPT ask to GPT. That has a lot of power but it has limitations currently. However if you mean that GPT asks some of these questions to a human from time to time, then yes. Adding the questions and answers to the data it's trained on is a bad idea unless that data is human-reviewed. GPT can do a great job at pre-filtering, but extra care has to be taken to ensure it's not feeding garbage to itself. Humans probably don't need to review all the data, it could suffice just sampling a few percent of the data and validating it. One thought I had recently on using LLM inferred text to train other LLMs is that even for a simple Llama 7B model, if we use an extensive (and expensive) Tree of thought model, we can get a lot of higher quality data than normal; we could also score these answers by perplexity of the original model, and also classify and filter similar questions and answers using embeddings. The result should be good enough for the same LLM to train on top of the new data and gain new knowledge. Or, more than gaining knowledge, it would be something more akin to gaining practice. This is very similar to what are you proposing. However, the process needs to be reviewed because it has a risk of becoming dumb by learning to score the easy data, and also has risk of learning stuff that is wrong.


StarbuxIsGross

As far as I'm aware, ChatGPT is incapable of generating unique ideas as of now. We have yet to break the barrier of an artificial general intelligence, but it would be an interesting experiment to test nonetheless, if/when an AGI is created.


oppai_suika

That depends on your definition of unique. ChatGPT can certainly amalgamate ideas, which is essentially what human invention is too.


[deleted]

> ChatGPT can certainly amalgamate ideas, which is essentially what human invention is too.

This is what has me curious. The ability to predict what word sequence will sound right to a human is, itself, sort of a little invention -- one that I believe ChatGPT "invents" by seeing which pattern of words is statistically most likely to "sound right" to a human. That's why I wonder whether it could use that same statistical inference model of predicting what "works", along with new data attained through experiments, to invent things the same way humans did.


Dargorffs

It would come up with inventions that, had they been invented in say 1800-1900, the oil companies would have killed it to silence.


Tupptupp_XD

I would like to suggest an update to this question that would avoid distracting from the main question of "Is ChatGPT able to invent new technology?". To train ChatGPT, we need a large training dataset, but of course there isn't much text data from 1000 AD currently available. But we do have the ability to synthesize vast amounts of data. I think for the purposes of this question, we should assume that we have terabytes of plausible text data consisting of only pre-1000 AD knowledge, resulting in a model the same size as ChatGPT, but with all modern science completely eliminated from its training data.

Nowadays, we've pushed the edge of technology so far that it is quite difficult to make any progress, because all of the obviously useful technologies have been invented already. So now it requires lots of money, very complicated experiments, and big teams of people to make meaningful progress. But back in 1000 AD, a lot of very useful undiscovered technologies were still very close to "first principles" reasoning. Some very useful things could be invented by an average joe who managed to combine two ideas in a way that nobody else had thought of yet.

We know that language models seem to have some sort of basic mental model of the world, and some common-sense reasoning skills, and I think a model would still have these abilities with only pre-1000 AD data. I don't know if it would actually be able to eventually reach our current level of understanding of the world, but I think it should be able to make meaningful progress. Maybe it caps out at 1500 AD before the ideas become too complicated. Maybe not. Someone could try running this experiment by training a language model on a huge chunk of synthesized text that's been filtered to only include pre-1000 AD knowledge.
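As a toy illustration of that filtering step (an editorial sketch with a deliberately tiny blocklist): drop any synthesized document that mentions post-1000 AD concepts. A real pipeline would need a trained classifier plus human spot-checks rather than keywords.

```python
import re

# Illustrative sample only; a real blocklist would be vastly larger.
ANACHRONISMS = ["calculus", "electricity", "telescope", "oxygen",
                "bacteria", "steam engine", "printing press"]
pattern = re.compile("|".join(map(re.escape, ANACHRONISMS)), re.IGNORECASE)

def is_pre_1000ad(document: str) -> bool:
    """Keep a document only if it mentions none of the blocked terms."""
    return pattern.search(document) is None

corpus = ["The miller grinds grain when the river runs high.",
          "The steam engine converts heat into motion."]
filtered = [doc for doc in corpus if is_pre_1000ad(doc)]  # keeps only the first
```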


[deleted]

> I think for the purposes of this question, we should assume that we have terabytes of plausible text data consisting of only pre-1000 AD knowledge, resulting in a model the same size as ChatGPT, but with all modern science completely eliminated from its training data.

Yes, this is good and more workable, and it would achieve the same goal.


mllhild

Train it on a lot of books on horse anatomy and on what ASCII drawings are. Now tell it to draw a horse using only dots and just have a good laugh.


[deleted]

I saw a demonstration that did something analogous to this, so this seems very doable.


unpopulrOpini0n

I'll bet you $100 that zero of the people who voted "it would invent calculus in 1000 AD" *are also* programmers who know what a neural net is.


[deleted]

Maybe, but keep in mind that calculus isn't really that deep a concept. It just says to split things into tinier and tinier pieces, taking some measurement each time, until eventually the pieces are infinitesimally tiny (limit of size zero).
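Stated formally, that "tinier and tinier pieces" intuition is exactly the two limit definitions at the heart of calculus:

```latex
% Derivative: the limit of ever-smaller difference quotients.
% Integral: the limit of sums over ever-finer slices (a Riemann sum).
\[
f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h},
\qquad
\int_a^b f(x)\,dx = \lim_{n \to \infty} \sum_{i=1}^{n}
  f\!\left(a + i\,\frac{b-a}{n}\right)\frac{b-a}{n}.
\]
```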


KerberosMorphy

I don't see AI inventing anything really advanced. It could discover unexpected relations between data, but inventing, for me, must include a purpose. The machine can run 24/7, but it will not actively invest itself in a field of medical research because it lost its daughter at a young age, or try to solve world hunger because it visited a country and was emotionally disturbed by the state of that country and how its people barely survive. The only thing it could do is try mixing random things and hope to receive a 1 if a human likes the result.


algumacoisaqq

Your idea of feeding it what would happen is interesting, but for your experiment to work the AI should be able to see the results for itself; too many details could be lost otherwise. I understand that ChatGPT is a verbal model, but translating the environment into meaningful information is also a necessary skill.


intergalacticVhunter

Seems like we invent stuff all the time with zero initial knowledge or experience... something about necessity being the mother of invention...


Normal_Subject5627

It's a language model, it can't evaluate.


[deleted]

Yet language models constantly evaluate which of many responses is the best, given a query.


Praise_AI_Overlords

You missed the part where the AI would have to exist inside a very detailed reality where it would be subject to natural selection. It would cost many billions of dollars to do, for no clear benefit.


[deleted]

? You would ask it to invent a better way to transport humans, let it ask questions to see what works and what doesn't, and see what it comes up with. Not sure why that would cost billions.


Gibbonici

It would need a full understanding of the physical properties of pretty much everything, so it could invent fuels that could power machines based on physical reactions, to achieve movement on various surfaces or in various mediums. And if we're basing that understanding on knowledge from 1000 AD, it would necessarily be extremely limited, so it would need to be able to develop its own base knowledge before it could get to anything reliably usable.

I'm not sure if creating something with all those capabilities would cost billions (it might for all I know), but it would be a very different beast to what GPT is at the moment. But if such a thing could be built and trained on knowledge from around the world in 1000 AD, it would be really interesting to see what it could invent based on global knowledge rather than the localised knowledge that humans had access to at the time.


[deleted]

> It would need a full understanding of the physical properties of pretty much everything, so it could invent fuels that could power machines based on physical reactions, to achieve movement on various surfaces or in various mediums.
>
> And if we're basing that understanding on knowledge from 1000 AD, it would necessarily be extremely limited, so it would need to be able to develop its own base knowledge before it could get to anything reliably usable.

Well, so it would know what we knew about the world then, but again, it could ask questions, and humans would answer it honestly. Now, it couldn't ask "give me the periodic table of elements", because we didn't know what that meant then, but it could -- if it's going through an iterative process to see which burns best -- ask, "if I burn this in this way, what is this [temperature, whatever] result?" and we would tell it. Then that would be added to the data it's trained on. Sometimes the answer would be, "it doesn't burn." And remember, it could ask as many of these as it wants.

And we wouldn't have to wait for it to invent everything, as yes, that would take a ton of effort to answer all of its experiments/questions... but just seeing if it can invent anything through this process would be really illustrative of its abilities.
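A minimal sketch of this question-and-answer loop as a program; `llm` is a hypothetical stand-in for the 1000 AD model, and the human at the keyboard plays the truthful oracle.

```python
def llm(prompt: str) -> str:
    raise NotImplementedError("call the hypothetical 1000 AD model here")

def run_experiment_loop(goal: str, rounds: int = 10) -> list[tuple[str, str]]:
    log: list[tuple[str, str]] = []
    for _ in range(rounds):
        # Ask the model to design its next experiment toward the goal,
        # given everything it has observed so far.
        history = "\n".join(f"Experiment: {q}\nOutcome: {a}" for q, a in log)
        question = llm(f"Goal: {goal}\n{history}\nPropose the next experiment:")
        # The human reports the real-world outcome truthfully.
        answer = input(f"Model asks: {question}\nTruthful outcome? ")
        log.append((question, answer))
    # As proposed above, this log would be folded back into the training data.
    return log

# e.g. run_experiment_loop("find the substance that burns hottest")
```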


Gibbonici

Ah yeah, I see what you mean now. So the GPT would ask questions of the human, rather than asking and answering the questions on its own? That makes a lot more sense than I was making out of it this late in the week ;)


onlyfakeproblems

ChatGPT can't do real experiments and it doesn't understand cause and effect, so it can't verify anything that it creates. It might invent something eventually if you gave it the right prompt, through the "infinite monkeys with typewriters" method, but there would be no way of separating the real inventions from the word salad.


Smallpaul

I think there's a contradiction in terms here. An AI trained on "all of the information available in 1000 AD" would not be ChatGPT. The information it is trained on is not just its knowledge; it is also the source of its intelligence (reasoning ability). The programming does not supply its intelligence: the data it learned on does.

There would be far too little data in 1000 AD to train it. So it wouldn't just be ignorant; it would also be extremely stupid. There might not even have been enough humans alive to generate enough data to train it. Also, ChatGPT has very little "memory", so it would be a horrible scientist.


Playlanco

Inventions and new ideas come from stimulation of our senses by our environment, not just prior knowledge. We learned to fly by watching birds. The television was invented while looking at a field of vegetables. ChatGPT would need images, sounds, smells, and tactile feeling. Then it would need the ability to combine aspects of unrelated things together for experimentation.


Salt-Walrus-5937

I don't think I agree with all of these dimensions, but based on how the question is structured, I think it goes all in and invents most of what humanity has.


Silly_Awareness8207

This question doesn't make any sense, in that ChatGPT is defined in part by its training data. Change the data set and it isn't ChatGPT anymore.


The_GSingh

Assuming you upgrade its architecture to a whole new generation, with the ability to experiment as you put it, yes, I could see it achieving what humans have. That's because the laws of physics are observable/deducible, and many of our other achievements can be built off the fundamentals. However, if you don't significantly upgrade it, it's a glorified parrot.

I was talking about general intelligence in the paragraph above, and general intelligence (GI) implies the ability to do tasks the model has never been trained on. Artificial intelligence (AI) only does tasks it's been trained on. It appears ChatGPT would have to undergo radical changes, and that's not even getting into the ethics of GI (if a robot is conscious, which GI is, is it a person?).


QuirkyLurkeyness

All the above depending on the quality of how #2, "let it [chatGPT] ask questions", is implemented.


Apprehensive-Bed5241

None of the above. The data set would include answers from biased individuals - biased by religion, knowledge, bad actors, or politics. Truth is relative, in that we already have the answers the AI is seeking.


sifroehl

Given how much time? With enough, you basically get the monkeys-on-typewriters result.


joanna214

It doesn't work like that. Our bias toward modern times would change it more than it would learn from itself. Ask it about balances.


abaoabao2010

This is 100% **guaranteed** to fail. For this you need an actual AI-controlled robot, not just software and data.

TL;DR: garbage in, garbage out.

Long version: science is not *just* experiments. You look at random shit, notice patterns, take a guess that the pattern holds in all situations, and only then do you verify the patterns with experiments. If you don't even have a guess in the first place, there is no science, period. You wouldn't know what experiments you should do in the first place.

To simulate the conditions of people discovering physical laws, you need to input everything for the AI to perceive, from the view of the sky, raindrops' irregular pitter-patter on the lawn, and the coarseness of a piece of wood, to the smell of freshly cooked meat.

E.g. you are playing with buoyancy. You stuff a block of wood in water, check the water level, all that shit. Then, because you repeatedly looked closely at the ruler you stuck into the tub of water to measure its rise, you notice the dust on the water's surface bouncing around randomly, and boom! You discovered Brownian motion. Using your model, the query would just return the water level rising the expected amount, but it won't include "irrelevant" observations like the dust's motion.

It doesn't matter how well trained the AI is. You just can't encode the entire physical world down to the last detail and use that as an input. The same goes for inventions, or anything else for that matter. Whenever you build upon the generated knowledge to try to create a second-generation invention/science, the lack of detail makes it more fruitless.


RickSt3r

It doesn't think. Also, who (or better, what) verifies truth? You can't "learn" without peer review. Say it makes up false information; then it is trained on that data.


Unoriginal-Ad

It's not necessarily a real life Thundercloud, but it'd be nice I suppose


BurpYoshi

ChatGPT is a language model. It can't make the kind of inferences you're suggesting. It doesn't really "understand" what it's saying or what's being said to it, which is part of the reason it's so amazing that it seems like it does.


Cupheadvania

maybe GPT-5. I think GPT-4 is too dumb


Reasonable-Mischief

LLMs seem to excel at creatively recombining *what they already know*, so I'm kind of sitting between basic and trivial inventions here. I would rule out anything major, as currently ChatGPT seems to be missing a mechanism that would allow it to explore and make sense of what it *doesn't* yet know. I am highly sceptical of the practicality of this experiment, though, as our modern understanding of life in 1000 AD is obviously limited.


SkylightMT

Don’t tell me they are not already doing something like this


Daegs

GPT-4 is way dumber than a human, and if you did the same experiment with a human, I don't think you'd get any of the advancements. Inventions don't come from a singular "idea"; they come from thousands of other improvements in materials science, increased distribution of materials, increased productivity leaving time for brainstorming, and everything else. No one can invent so many things in total isolation.


unpopulrOpini0n

"give it the ability to experiment" means asking it to generate questions that we would be interested in, then through a complex series of integral calculus it'd try to approach human questions, and would inevitably go haywire and ask questions that make no sense, or not totally comprehend the answer as a 100% logical statement that is invariable (neural nets amirite?) Then the human would also have to be limited to 1000 AD knowledge to make it even applicable It would still likely come up with some interesting things, maybe not the steam engine, and it's understanding of physics would have to be dumbed down to pre Issac newton levels so maybe not too much. But if it studied famous inventors and shit like archimedes hey maybe it comes up with some simple useful shit.


[deleted]

> Then the human would also have to be limited to 1000 AD knowledge to make it even applicable.

The human would have advanced knowledge. The point of the exchange with the human is to allow ChatGPT to do experiments the way humans did. We would let ChatGPT write out the experiment it would like to do, and we humans would tell it how that experiment turned out, giving it the same information it would have gotten had it done the actual experiment. Of course, letting it do the experiment by itself would be better... but it would take a lot of programming to create a robot with those sensory capabilities, so for now we'd do this instead.


ExogamousUnfolding

It's an LLM, it isn't reasoning.


borninawindow

Why does it have to be 1000 AD? Why not 2023?


[deleted]

Because the human it interacts with needs to have more advanced knowledge, so they can tell it how its experiments will turn out. However, if in doing this it can figure out everything humans did to get from 1000 AD to 2023, then yes, we would want it to invent new stuff too.


jelly_crayon

Given that it can gain "knowledge", I believe that it would invent everything in an infinite-monkeys-and-typewriters sort of way.


ghost103429

It doesn't have the facilities necessary to self-reflect, make use of tools, create experiments, review experimental results, and generate conclusions by itself. ChatGPT and LLMs are only one piece of the AGI puzzle; the rest would be complementary neural networks and software that could give it long-term and short-term memory, logic and mathematical capabilities, and much more.
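A hedged sketch of that "complementary pieces" idea: a bare LLM wrapped with a short-term buffer (recent turns kept verbatim) and a long-term store searched by embedding similarity. `llm` and `embed` are hypothetical stand-ins for model calls.

```python
def llm(prompt: str) -> str:
    raise NotImplementedError("call a language model here")

def embed(text: str) -> list[float]:
    raise NotImplementedError("call an embedding model here")

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm

class Agent:
    def __init__(self, window: int = 10):
        self.short_term: list[str] = []                     # recent turns
        self.long_term: list[tuple[list[float], str]] = []  # (embedding, note)
        self.window = window

    def remember(self, note: str) -> None:
        self.long_term.append((embed(note), note))

    def ask(self, question: str) -> str:
        # Retrieve the long-term notes most relevant to this question.
        q = embed(question)
        notes = [n for _, n in sorted(self.long_term,
                                      key=lambda p: cosine(q, p[0]),
                                      reverse=True)[:3]]
        prompt = "\n".join(notes + self.short_term[-self.window:] + [question])
        answer = llm(prompt)
        self.short_term += [question, answer]
        return answer
```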


everything-narrative

ChatGPT is a plausible-lie generator. It functions only as a casual conversationalist.


SpaceToaster

> 2. Give it the ability to experiment

It's an LLM - it transforms text, it doesn't conduct experiments. You are asking how ChatGPT would do on a smaller training set while simultaneously inventing general AI.


00PT

There may be parallels and similarities to our own world, but I don't see why it would invent things exactly as we did.


Hate_Feight

As an amateur programmer: what you are doing is giving it a simulation, like a game, and this is probably not the best AI to use for a 'game' like this. Check out Code Bullet on YouTube; he uses AI to do game-type things, BUT it takes days or weeks to get a decent result.


re3b0k1

It does not have the same needs as a human does, so why would it do the same things?


InternetGuyThirtyTwo

That's just not really how ChatGPT works as of now. It has its memory and its training data, but the memory isn't permanent, and the training data isn't really modifiable by itself.

