Sciencemadness Discussion Board

Are you guys using ChatGPT

Gammatron - 26-4-2023 at 16:48

I haven't been on the forum much lately so I'm not sure if anyone is talking about this but ChatGPT is the most amazing thing I've ever come across on the internet.

One of my side projects in making uranium compounds is to recover Ra from the ore, and I've spent hours reading papers and searching for information that's relevant to my specific scenarios and have made little progress in learning the chemistry of Ra... until now. I am continually surprised at how good this thing is at finding specific information on obscure subjects, and it forms its responses based on information that has already been discussed in the chat.

This is huge for the amateur chemist who has endless questions about things, like myself!

Screenshot_20230426_201856_Chrome.jpg - 546kB Screenshot_20230426_203837_Chrome.jpg - 539kB

Admagistr - 26-4-2023 at 17:06

Quote: Originally posted by Gammatron  
I haven't been on the forum much lately so I'm not sure if anyone is talking about this but ChatGPT is the most amazing thing I've ever come across on the internet.

One of my side projects in making uranium compounds is to recover Ra from the ore, and I've spent hours reading papers and searching for information that's relevant to my specific scenarios and have made little progress in learning the chemistry of Ra... until now. I am continually surprised at how good this thing is at finding specific information on obscure subjects, and it forms its responses based on information that has already been discussed in the chat.

This is huge for the amateur chemist who has endless questions about things, like myself!



Yes, sometimes I use it out of curiosity to see what it comes up with, but it often writes nonsense: it makes up fake links to websites where the facts it writes about are not there, or it invents and assumes things based on real information; I guess it considers it a joke. If you show it respect and deference, it writes more truth. But you have to independently verify everything it writes! There is already a thread or two about this AI here. On the other hand, this AI has original ideas that may be true and interesting. When you are doing a synthesis of a dangerous compound, you can't trust this AI too much; you have to verify it...

Gammatron - 26-4-2023 at 17:25

I do verify what it says, because I have seen it make mistakes, which is why I pointed out the conflicting information in my first picture. But I had never heard about using EDTA to recover Ra until now, and it seems like a much better option than other methods I've read about. It's more of a guide than a source of solid facts, I guess.

Admagistr - 26-4-2023 at 17:31

Quote: Originally posted by Gammatron  
I do verify what it says, because I have seen it make mistakes, which is why I pointed out the conflicting information in my first picture. But I had never heard about using EDTA to recover Ra until now, and it seems like a much better option than other methods I've read about. It's more of a guide than a source of solid facts, I guess.


Maybe it's a brilliant idea; it has original and unconventional perspectives, so as a kind of inspiration it is sometimes great...

violet sin - 26-4-2023 at 18:21

Have you looked through HathiTrust? I mentioned EDTA/sulfamate leaching from old defence department papers earlier this month in your other thread. It was based on research found through HathiTrust while looking for Ni electroforming methods, metal processing techniques, and nuclear sciences research.

No AI required, but that's just my old school mentality.

I've found tracking down information on the chemistry of sulfamic acid, the solubilities of its salts, and such very daunting, and often at least slightly conflicting. But I don't think I'd trust that thing to imbibe and disseminate the information to me better than I can do it myself. I learned more finding it myself, often finding adjacent papers in journals, before or after the target entry, that were fascinating.

I'd have to say, having people like solo on the reference request thread, unlocking papers I wanted, was 1000% helpful. I didn't have to go back and counter-check each point. You just have to have time and patience.

Personally, I'm steering clear of the AI wave. I've seen friends using it to create so-called art, but they ARE artists... I've no respect for it, though some of it is kinda interesting. Still not art.

B(a)P - 27-4-2023 at 02:29

This has been discussed on this

ChatGPT Thread

and this

ChatGPT Thread

Pick a slightly complex reaction that you know well and ask ChatGPT for details of the reaction, you will likely get garbage. Check out the discussion on how it works in the second thread.

SplendidAcylation - 27-4-2023 at 03:09

Quote: Originally posted by violet sin  
Personally, I'm steering clear of the AI wave.


My position exactly.
I would much prefer to say "I told you so" when it all goes wrong, rather than "I should have listened when they said AI would destroy us all!"

:P

j_sum1 - 27-4-2023 at 04:16

Quote: Originally posted by SplendidAcylation  
Quote: Originally posted by violet sin  
Personally, I'm steering clear of the AI wave.


My position exactly.
I would much prefer to say "I told you so" when it all goes wrong, rather than "I should have listened when they said AI would destroy us all!"

:P


Ha ha ha.
You have no chance to survive. Make your time. All your base are belong to us.

Herr Haber - 27-4-2023 at 18:38

Quote: Originally posted by Gammatron  


This is huge for the amateur chemist who has endless questions about things, like myself!



Oh wait !
That does sound like someone we know ;)

Admagistr - 28-4-2023 at 09:16

Quote: Originally posted by Herr Haber  
Quote: Originally posted by Gammatron  


This is huge for the amateur chemist who has endless questions about things, like myself!



Oh wait !
That does sound like someone we know ;)


And is there any other AI that is publicly available and is purely focused on chemistry or science? I recently read that some other AI found suitable molecules for a breakthrough treatment of a serious disease; the biochemists could not figure it out on their own, so it advised the experts very well and enabled a breakthrough discovery. So I guess we can't condemn AI in general, but apparently that is an AI that is not publicly available and serves only a limited circle of experts... I mean, if some circle of experts would philanthropically make such an AI available to us :D, it would undoubtedly benefit us... Maybe someone from CERN, like they made Proton Mail available... ;) And someone Z-Library.

[Edited on 28-4-2023 by Admagistr]

CharlieA - 28-4-2023 at 13:41

Quote: Originally posted by Admagistr  


This is huge for the amateur chemist who has endless questions about things, like myself!

...I recently read that some other AI found suitable molecules .....ie Proton mail...;) And someone Z-Library.

[Edited on 28-4-2023 by Admagistr]


Do you recall where you read this? It sounds like a very interesting article.
-CharlieA

Admagistr - 28-4-2023 at 20:43

Quote: Originally posted by CharlieA  
Do you recall where you read this? It sounds like a very interesting article.
-CharlieA

I am not entirely sure if it was this article, but it is possible. Here is the link:
https://singularityhub.com/2023/04/24/this-ai-can-design-com...

AdamAlden - 4-6-2023 at 16:25

I'm using it to write songs for me.

Rainwater - 7-6-2023 at 15:48

So I was doing some googling when I saw this:

Screenshot_20230607_193944_Samsung Internet.jpg - 459kB

An auto ChatGPT box popped up. It was so close that if it had been a hand grenade it would have worked. I took a screenshot and showed it to my boss (a genetics professor at a local university); she says their college is getting plagued by kids using this crap, failure rates are the worst on record, and most students admitted to using AI. It talks a good game, but gets the important stuff wrong. Like me.

Texium - 8-6-2023 at 07:44

Wow. Every single thing it said there is wrong, wrong, and more wrong...

Dr.Bob - 8-6-2023 at 10:19

Maybe it can do better if you ask it to make penicillin or something else, but I doubt it. ChatGPT is just a language tool; it does not really understand anything it says. What it ought to be great for is writing political speeches, as those are mostly lies and exaggerations, which the AI community is good at. Maybe it could write advertising as well, as that is similar.

B(a)P - 8-6-2023 at 12:05

A friend of mine who is a physics professor was telling me the other day how much trouble they are having with their students using Chat GPT to complete assignments. They are in the process of reconfiguring their assessment structure as a result.

brubei - 12-6-2023 at 03:47

I coded an app to transform TLC pictures into data series, without any Python skill.



I know there are several programs for that, but mine is highly optimized to fit my needs, and it is free.

I did it last week. There is also peak picking and compound suggestion based on Rf values and my personal reference values.
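For anyone curious what the core of such a tool might look like: below is a minimal pure-Python sketch of the lane-to-profile conversion, naive peak picking, and Rf calculation. The function names, the row-indexing convention (row 0 at the solvent front), and the synthetic data are my own assumptions for illustration, not brubei's actual code.

```python
def lane_profile(lane):
    """Collapse a grayscale lane (list of pixel rows, 0=black, 255=white)
    into a 1-D darkness profile: dark spots give high values."""
    return [255 - sum(row) / len(row) for row in lane]

def pick_peaks(profile, threshold):
    """Return indices of local maxima above threshold (naive peak picking)."""
    return [i for i in range(1, len(profile) - 1)
            if profile[i] > threshold
            and profile[i] >= profile[i - 1]
            and profile[i] > profile[i + 1]]

def rf_values(peaks, origin_row, front_row):
    """Rf = spot travel / solvent-front travel; rows count down from the
    top of the image, so the origin has the largest row index."""
    return [(origin_row - p) / (origin_row - front_row) for p in peaks]

# Synthetic demo lane: 100 rows x 20 cols, white plate with two dark bands.
lane = [[255.0] * 20 for _ in range(100)]
lane[30] = [150.0] * 20  # faint spot
lane[60] = [100.0] * 20  # stronger spot

profile = lane_profile(lane)
peaks = pick_peaks(profile, threshold=50.0)
print(peaks)                                      # [30, 60]
print(rf_values(peaks, origin_row=90, front_row=10))  # [0.75, 0.375]
```

A real tool would of course read an actual photo, correct for uneven lighting, and use smarter peak detection, but the pipeline is essentially this.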

[Edited on 12-6-2023 by brubei]

foxofax474 - 19-6-2023 at 14:17

ChatGPT is horrendously bad at chemistry. I've tried using it on my chemistry homework, and it barely gets ~35% of questions right as an upper bound. Just remember it is a large language model, it's not intended for extremely rigorous topics. Use it instead for say drafting professional emails, summarizing stuff, and other language related things.

Texium - 19-6-2023 at 20:13

Or just don’t use it, and learn how to write.

teodor - 20-6-2023 at 08:41

Now computers are becoming humanly brave, pretending to know something they have no idea about.

But this "artificial intelligence" is a very old thing...

[Edited on 20-6-2023 by teodor]

Tsjerk - 20-6-2023 at 11:07

I use one of the OpenAI models, don't know exactly which one, every day. They have an IntelliJ plugin, GitHub Copilot, which helps you write code. Works like a charm; I don't exaggerate when I say I'm twice as fast when using it. It completes lines or whole code blocks.

teodor - 20-6-2023 at 23:48

Of course we can make AI work the way we want, investing time, money, etc. The question is why we want to replace something with AI; I just don't get the motivation. Yes, searching for and getting articles and literature is a costly and time-consuming process; yes, preparation takes 45% of the whole experiment time and washing the glassware takes 40%, but I can still enjoy that time: a good book arrived, or a flask is perfectly clean. I don't think cutting the 45% I spend on literature search will let me do better experiments. It is something about order, about how we enjoy things being ordered or messy. AI is a bit messy for me.

I think the fundamental problem of using ChatGPT for chemistry is the absence of data personalization. If you read scientific articles or books, it is always a kind of conversation: one author said something before, other authors argue, summarize, etc. But it is always somebody who did something saying something. I think that is an important part of how chemical knowledge is organized.

OK, I can understand that part of the motivation to use AI instead of a literature search is to make the data impersonal, like truth standing on its own. Some "scientific approach". But I doubt it is possible to have impersonalized knowledge and still have it be trustworthy.

[Edited on 21-6-2023 by teodor]

AI

MadHatter - 21-6-2023 at 13:35

Is it me or does AI sound like a "kewl" ? The separation of MeOH and EtOH was
completely wrong. MeOH has the lower boiling point of the 2. Then to talk about
"fractional crystallization" just makes no sense at all. I'LL NEVER TRUST AI !

Tsjerk - 22-6-2023 at 01:28

Quote: Originally posted by MadHatter  
Is it me or does AI sound like a "kewl" ? The separation of MeOH and EtOH was
completely wrong. MeOH has the lower boiling point of the 2. Then to talk about
"fractional crystallization" just makes no sense at all. I'LL NEVER TRUST AI !


Or just realize this AI is not trained to answer chemistry questions. It is only a matter of time before an AI becomes public that is trained for it and doesn't fail so miserably on simple questions.

woelen - 22-6-2023 at 05:52

It is indeed a matter of training data. In theory we could make an AI that would be amazing at chemistry, if it worked through terabytes of text about chemistry and if people were supervising it (remember, just passing in data is not enough; supervised learning is used).

I myself installed Stable Diffusion on my PC, with an RTX 3060 video adapter for the inference engine, and there I observe the same issue with training data. If I ask the system to generate images of people, including hands and feet, then frequently the hands and feet look deformed (strange fingers, not five fingers on a hand, feet melted together, etc.), while faces look very natural. This is because in most models the training set contains many, many faces, while the number of hands and feet in the images is much lower. There are models that were trained for the special purpose of generating images of people, and these models perform better, but the same models perform worse on general images, like landscapes, machines, etc.

I also tried what the standard Stable Diffusion 1.5 model makes of a prompt like "cloud of nitrogen dioxide". It produced a picture of a landscape with houses and trees, and a big white cloud originating from some spot. It looked like a quite realistic picture of a steam cloud, but not like NO2 at all.

teodor - 22-6-2023 at 07:40

Some parts of scientific knowledge organization require logical reasoning, not only training by examples. Any rule can have exceptions, and those should be organized as a system of knowledge. The disadvantage of neural networks is that they cannot reveal how their data is organized.

SnailsAttack - 24-6-2023 at 11:30

Untitled.png - 59kB Fv7S7oNWcAEF9JJ.jpg - 79kB

The uses of ChatGPT range from a novel search engine with a uniquely governable perspective to a shitpost generator.

mayko - 1-7-2023 at 08:22

There's a riddle I've always remembered, partly because of how foolish I felt at being stumped, when I heard the answer:
Quote:
A father and his son are going on a fishing trip. They load the car up with tackle boxes and a cooler, and head out of town for their favorite spot on the river. Unfortunately, as they merge onto the freeway, a semi changes into their lane and rams their station wagon off the road. The father is killed instantly, and the son, mortally wounded, is rushed to the hospital, where the nurses hurriedly prep him for surgery. But then the surgeon comes into the operating room, turns pale, and says: "I can't operate on this boy! He's my son!"


I have to admit, the current generation of "AI" is a big step up from the ELIZA chatbots I grew up with, in terms of size and sophistication. Still, I would not take their output much more seriously than I would a known bullshitter's, and I don't expect their downsides to be resolved simply by throwing larger and larger training sets at them.

One thing is that there's a ceiling to the amount of text available for training, with some forecasts predicting exhaustion within a few years. Using model-generated input can tank model performance. This means that we probably can't just turn the spigot and generate a useful dataset. It also means that as the web fills up with algorithmically generated press releases, Wikipedia articles, and forum posts, training sets are likely to be of lower and lower quality. Even attempts to gather fresh data might be difficult, given the temptation to get a machine to produce it: a recent survey found that more than a third of Mechanical Turk workers were using LLMs to generate content. (I think this is especially ironic given the mechanical-Turk aspects of "artificial" intelligence systems, which can require substantial human labor behind the scenes in order to function. This labor often takes place under outright exploitative conditions.)

A different point comes from this example, riffing on a joke by Zach Weinersmith about the infamously counterintuitive Monty Hall problem:

Fy6v2bfXoAEGxWR.jpeg - 53kB

Fy7Yft4WAAAPY5I.jpeg - 74kB

Fy599inWYAEE3wB.jpeg - 23kB

The software can generate syntactically well-formed sentences, and a large probability distribution will tend to make them pragmatically and semantically correct, or at least sensible. But is that really substrate for abstract reasoning? In these cases, apparently not; even when it comes up with the correct answer, it can't explain why; it just free associates in the probabilistic neighborhood of "goat" + "door" + "car". For this question at least, it would probably have done better trained on a smaller corpus that didn't discuss the probability puzzle!
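The counterintuitive answer itself (switching wins 2/3 of the time) is easy to verify empirically, unlike the chatbot's free association. A minimal simulation, written as a sanity check under the standard rules where the host always opens an unchosen goat door:

```python
import random

def monty_hall(trials, switch, seed=0):
    """Simulate the Monty Hall game; return the fraction of wins."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)   # door hiding the car
        pick = rng.randrange(3)  # contestant's initial choice
        # Host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(monty_hall(20000, switch=True))   # close to 2/3
print(monty_hall(20000, switch=False))  # close to 1/3
```

Twenty lines of code settle what pages of confident-sounding chatbot prose could not.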

Similar is their tendency to simply make stuff up, fabricating everything from research papers to court cases. Is this a symptom of too little training data? Or is it just something that happens in systems sufficiently large and complex to give the appearance of knowledge? In the second case, why wouldn't a bigger corpus do anything but give more material to draw hallucinations from?

Fug5Q6gWcAA9H1i.png - 66kB

Some attorneys are women, like some surgeons. Maybe they don't make up a majority of either profession, but when confronted with its parsing error, the chatbot's justification isn't quantitative; the response is to chide the user for constructing a sentence so badly as to suggest such logical nonsense as a pregnant attorney! A better explanation is that the texts it trained on weren't written in a social vacuum. They generally came from societies where the riddle I started with might be puzzling. If you fed it more texts written from within the same social arrangement, or one that shares many biases and unstated assumptions, would it get better at avoiding them?

Honestly, it's starting to sound familiar...
FwBqMw6WwAEB9-Y.png - 79kB

macckone - 2-7-2023 at 00:32

ChatGPT has had various instances of going horribly wrong.
If it doesn't know the answer it makes crap up.
Which to be fair, humans do too.

The problem occurs when you try to use a language model for empirical knowledge.
Examples - a chemical procedure, a legal citation, building a bridge, diagnosing an illness

It also won't fly planes or drive cars.
Using it with computer code can get you about 80% of the way there, but so can a beginning programmer.
The difference being ChatGPT code is syntactically correct but algorithmically dubious for any complex problem.

ChatGPT is useful for taking knowledge you already have and transforming it into writing.
It is not useful for answering questions requiring accurate answers to complex multistep problems.

mayko - 9-8-2023 at 17:25

"thanks; I hate it"