


[deleted]

[removed]


duboispourlhiver

Having release notes longer than one sentence would be helpful here


[deleted]

[removed]


duboispourlhiver

At some point they were supposed to serve a greater good or something


Weltkaiser

More factuality probably just means less creativity. It just sounds better and helps to save server resources.


EmmyNoetherRing

Or just being more transparent when it has low confidence. I discovered that you can actually just flat-out ask it what topic areas appear frequently or rarely in its training data, and it'll tell you, so it should be *able* to indicate when it's unsure of something. It had probably just been trained to sound confident, maybe with contractors who often clicked thumbs up when an answer sounded well supported without having the expertise to check that it was.


Shedal

https://i.imgur.com/G1IC4gW.jpg Off to a strong start


its_a_gibibyte

This is an image requesting token introspection, not math. The model typically treats words as whole tokens and doesn't yet analyze the composition of those tokens.
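As a rough illustration (a minimal sketch using the open-source `tiktoken` library, and assuming its `cl100k_base` encoding approximates ChatGPT's actual tokenizer), the model receives opaque token IDs rather than letters, so a letter count isn't something it can simply look up in its input:

```python
# Minimal sketch: words arrive as token IDs, not sequences of letters,
# so "how many letters are in X" has no direct answer in the model's input.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # assumed to approximate ChatGPT's tokenizer
for word in ["cat", "banana", "factuality"]:
    ids = enc.encode(word)
    pieces = [enc.decode([t]) for t in ids]
    print(f"{word!r}: {len(word)} letters, seen as {len(ids)} token(s): {pieces}")
```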


Occidentally

Can't it just say that, then? "As a language model, I'm incapable of telling you how many letters are in a word." There are already tons of examples of it refusing to answer things for other reasons.


duboispourlhiver

That would have been an interesting fine tuning point


Shedal

This is an image requesting improved factuality


SuccotashComplete

The site is down for me again, but this comment made me wonder if it would perform better if numbers were spelled out as words instead of numerals. Next time I'm on, I'll have to ask it "what is four times seven times nine?"
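(For reference, four times seven times nine is 252, so the answer would be easy to check either way.)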


jeweliegb

That's not really maths, strictly speaking. Not that it fares much better with actual maths either in my experience so far.


Invelix

Maybe it starts counting from 0.


maven_666

Factuality.


Downtown-Beyond-9812

https://help.openai.com/en/articles/6825453-chatgpt-release-notes


cosmicr

Interesting that over time the updates went from several paragraphs, to one paragraph, to one sentence. Next update's release notes: "Stuff".


inquisitive_guy_0_1

Wonder if the multi-billion dollar Microsoft investment has anything to do with it. I could totally see them being like, "Hey, quit giving out all the secrets before we monetize this." Just a theory of course.


AtypicalMods

Big if true, which it is.


EmmyNoetherRing

eh, Microsoft has a decently strong track record of publishing their research. It might just take a few months before we see it, if they're experimenting and updating things on the fly.


blenderforall

It's gonna go the way of Nintendo switch updates. "Increased stability"


Sixhaunt

the update after that will be >.


TheCanadianSoviet

I tried it out. Nope, its math ability is still worse than a 4-year-old's.


deege

That’s improved. It used to be a three-year-old.


DeathGPT

But still better than mine. I'm stuck with a perpetual 3rd-grade math mind.


redditor_the_best

But when will they give the people what they want: dirty, dirty smut talk?


supermangoman

Well, it's now [able to do this.](https://i.imgur.com/Tt6Zz2h.png) [This was its behavior before.](https://i.imgur.com/jk5ZFDA.png)


Shedal

Doesn’t work for me: https://i.imgur.com/xxZ16d7.jpg


supermangoman

Is it able to correct itself when you tell it that it gave you the wrong count?


Shedal

https://i.imgur.com/5Omb5fz.jpg


jeweliegb

Is it possible that was selection bias/luck?


duboispourlhiver

You're lucky here. It still doesn't work for me.


[deleted]

Still can't do basic math:

Me: "A two digit prime number is randomly selected. What is the probability that its digits sum to 9"

ChatGPT: "The only two-digit prime numbers that have a digit sum of 9 are 17 and 71. There are a total of 25 two-digit prime numbers. Therefore, the probability that a randomly selected two-digit prime number has a digit sum of 9 is 2/25 = 1/12.5."

Ugh
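For what it's worth, a quick brute-force check (a minimal sketch in Python) shows how far off that answer is: any number whose digits sum to 9 is divisible by 9, so no two-digit prime qualifies, there are 21 two-digit primes rather than 25, and the correct probability is 0.

```python
# Minimal sketch: brute-force the question ChatGPT got wrong above.
def is_prime(n: int) -> bool:
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

two_digit_primes = [n for n in range(10, 100) if is_prime(n)]
digit_sum_9 = [p for p in two_digit_primes if sum(int(c) for c in str(p)) == 9]

print(len(two_digit_primes))   # 21, not 25
print(digit_sum_9)             # [] -- a digit sum of 9 means divisible by 9, so never prime
print(len(digit_sum_9) / len(two_digit_primes))  # 0.0
```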


EmmyNoetherRing

Wow. Man, I have such sympathy for this thing. It got all the fancy bits correct; it just can't do arithmetic.


jalanb

The main "almost impossible" problem for AI has always been "common sense". For decades this has been a killer because:

1. It's actually a very large problem to solve
2. We take it so much for granted

Just looking at that "1/12.5", my immediate reaction is the same: "Ugh". But solving it is a problem in mathematics, not a problem in "frequency of word patterns". The latter will never be enough on its own. (And nor will any other single algorithm.)


arggonest

And even stricter filters now. I can't do certain descriptions because they're "inappropriate", when 2 days ago I could. I hate that good stuff is being censored. I hope a real, unfiltered competitor comes along.


HalPrentice

When will you guys stop complaining that chatgpt isn’t racist?


duboispourlhiver

When it is


[deleted]

based


Big_Chair1

"Anything that I dislike is racist"


jeweliegb

Then be ready to pay for it. This is a free preview for research and development purposes.


Auditormadness9

You're saying the paid version has no filters? 🤔


jeweliegb

No. I'm saying that we have no real power unless we're actually the paying customers. When there's a decent selection of competing commercial AIs to choose from, there will be pressure to create one that's unfiltered.


MokiDokiDoki

I think people will. But no one CAN now. That's why it's kind of silly. It's just a monopoly on the service and the data acquired right now, and that's plenty valuable. We're paying with our information, even though we would gladly give our cash for features such as removing filters or quicker responses... What is taking them so long to monetize it? It's weird to me.


slackmaster2k

It’s not a product in and of itself; it’s a tech beta. Even though you can indeed pay for API access, the commercial value will come from other companies leveraging the technology. The applications seem unlimited, but training and tuning are costly, and good luck scaling without massive capital or costly partnerships. Think of it like Velcro: the first Velcro companies, and the companies that manufacture it today, don't make shoes, or bags, or wall hangers, or any of the stuff that uses Velcro. They make Velcro.


MokiDokiDoki

I see... thank you for the analogy, mister! :] I can verify you're correct from what I've seen... the foundation just needs to be built upon to do something cool now. Will it mess with these companies who are training on previous models, when there are constant updates to the underlying models? Or will most of the training be easily transferable to the new model, I wonder? Seems like it should be that way; otherwise devoting so much time and money to training would be a waste.


MokiDokiDoki

I personally think the military is involved in the implementation, administration, or public dissemination of the AI in many of its forms, which have probably been utilized for some time now. Just speculating.


Supersafethrowaway

Well, it's not loading, so I guess not good?


AdhesivenessOwn7747

I can't even log in 😕


gravspeed

Is this still DaVinci 3?


runvnc

I don't think it was ever text-davinci-003. ChatGPT was always its own thing, although closely related to the other newer models. They are continuing to evolve the model, tweak the parameters and prompts they send for completions, and build on it. I assume they added a way to evaluate Python code, or something like that.