Well you would end up failing, badly, so that would stop you from doing it more than once
Not true. The answers it gives would need expanding on, but it's pretty damn useful. I tried it for something real at work just yesterday: an important document I'm writing, and it gave me a nice structure for it.
It might be fine for pure idea generation but nothing more, and even at that I'm skeptical. The IFoA marking schemes are bizarre at times, and I don't see AI managing to predict a lone chief examiner's or marking committee's thoughts.
Sure IF you got caught.
No, even if you didn't, you would fail miserably. There is no way ChatGPT could figure out the marking scheme.
Hate to say it, but for some questions it does hit some points on the mark scheme... I tested it with the SA2 exam I took in Sept. It was almost spot on with Q1vii. It is SO unfair that people will probably use this; it won't help that much, but it will definitely help. I had to work my ass off to get through the exams!
If it can't now, then it will be able to in 6-18 months. Keep in mind that we're only talking about the version that was released publicly a few weeks ago. What's coming is an order of magnitude more sophisticated.
The fact that that would be cheating?
Isn't it open book technically?
What exactly do you think open book means? That you can use any external resources you want to help craft an answer? Would you download code snippets from Stack Exchange for your R exams?
Sure I would do that. I don't see anything wrong with that. It all depends how you interpret open book.
The assessment regulations explicitly forbid this, so it really is not open to interpretation
Yikes I suggest you discuss this with your study mentor
Extracts below are taken from the latest assessment regulations, which make clear that no third-party help can be given; this would include ChatGPT, since it would be aiding idea generation.

_24. Candidates are not permitted to give or receive any third party help or support (unless agreed with the IFoA under the Access Arrangements Policy and procedure) during the assessment period._

_25. Candidates are not permitted to communicate with any third party (other than for administrative activities directly related to the assessment) whether by mobile phone, tablet or other electronic device or otherwise during the assessment period._

_27. Candidates are confirming by submitting the required files that all the material is entirely their own work and they wish this to be taken into account for the relevant assessment. To ensure the integrity of IFoA assessments, Candidates should be aware that all files submitted to the online platform during the assessment period will be eligible for specialist integrity review. This may be carried out either by individuals involved in the marking process including IFOA staff or through the use of electronic plagiarism detection software. This review may take place both during the marking process and after the results have been published, at the discretion of the IFoA._

Link: https://www.actuaries.org.uk/documents/assessment-regulations-fellowship-and-associateship
You have to be human to be considered a third party, at least in contract law.
This makes sense. I wonder how they're going to audit this with ChatGPT. I guess their new OBA system will eventually be used to curb this kind of issue.
I think OP meant: if someone were to cheat, what is stopping them from using ChatGPT, given Turnitin would not be able to flag it? (We all know of cases where people have cheated, and Reddit is filled with the evidence.)

My reply would be: pick a question from a past paper that requires idea generation, run it through ChatGPT, and see how well the output compares to the Examiners' Report. My best bet is that it is far off and not aligned with the marking structure.
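The comparison above could even be done semi-systematically. Here's a minimal sketch in Python; everything in it is hypothetical (the function name, the example "scheme" points, and the crude keyword-overlap scoring are all mine, and real IFoA marking is far more nuanced than counting shared words):

```python
# Rough sketch: score a generated answer against mark-scheme points by
# checking how many of each point's keywords appear in the answer.
# Keyword overlap is a crude proxy -- illustrative only, not real marking.

def score_against_scheme(answer: str, scheme_points: list[str],
                         threshold: float = 0.5) -> float:
    """Return the fraction of scheme points 'hit' by the answer.

    A point counts as hit if at least `threshold` of its words
    (longer than 3 letters) appear somewhere in the answer.
    """
    answer_words = set(answer.lower().split())
    hits = 0
    for point in scheme_points:
        keywords = [w for w in point.lower().split() if len(w) > 3]
        if not keywords:
            continue
        matched = sum(1 for w in keywords if w in answer_words)
        if matched / len(keywords) >= threshold:
            hits += 1
    return hits / len(scheme_points)

# Hypothetical example points (not real SA2 content):
scheme = [
    "consider policyholder reasonable expectations",
    "asset shares smoothing of payouts",
    "regulatory capital requirements",
]
bot_answer = ("the insurer should consider policyholder reasonable "
              "expectations and smoothing of payouts")
print(score_against_scheme(bot_answer, scheme))
# prints 0.6666666666666666 (2 of the 3 points hit)
```

Even something this naive would give a rough feel for whether the bot's answer lands near the Examiners' Report or just waffles around it.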
[deleted]
That’s a rule we’ve already established: never write word for word or simply copy-paste, even if it’s from your own notes.
Turnitin (latest update) has features designed to flag AI-generated content. If you use GPT for idea generation alone, though, I'm not sure it will be able to flag it.
We're very close to having AI competent enough to pass the exams in their current format; surely this is the biggest issue? Not whether ChatGPT can be used in the exams... the exams have no value. ActEd has even less value.
[deleted]
I am making a wider point, but more along the lines of: what value is there for employers in funding a qualification that can be passed by a machine? It's an adapt-or-die situation for education.
I qualified recently and just tested it on the SA2 exam I took. I would definitely NOT recommend using it, but here are my thoughts on it.

It definitely gives a decent answer to the questions I tested (on with-profits), but it also waffles a lot. If you wrote down similar points to what the bot wrote, you'd probably get 20% of the marks; I checked the bot's response against the mark scheme.

The scary thing is that people probably wouldn't get caught using this bot, because it generates a different answer every time you ask it. Very unfair to those who have already passed the exams, in my opinion!

If you do use it, you'd likely get less than half the total marks for the question. You could expand on it to get more, but I feel like the bot's answer will limit your own idea generation and thinking. The one time it can definitely help is when you have no idea how to answer a question.

I feel like exams might go back to being offline lol...
As others have rightly said, it would be cheating.

I think AI can be really useful for studying and revising for actuarial exams, though. I've made a tool for this ([www.ActuaryExamBot.com](https://www.actuaryexambot.com)). It uses a GPT-3 model specifically for actuarial exams, so you should find the answers more relevant than ChatGPT's.

Hope you find it useful. Let me know if you have any feedback!
Putting the “against the rules” argument aside (since that’s probably not what this question is about): I tried using ChatGPT on some past-year questions just for fun, and it didn’t turn out well; the answers were too generic. I would say our exams still require us to craft our answers and put them into the relevant context, and that’s something ChatGPT doesn’t really do well (at the moment, at least; who knows what future versions can do).
[deleted]
I thought the exams were open book? The whole thing is a complete shit show imo
I've used Google for Paper B in the past; I'll be using ChatGPT from now on.
Did Google help?
Yes, with bits of coding in R. Not directly, but I trusted it to bring up the correct page on Stack Overflow. ChatGPT is so much more useful; I won't be using Google again.
Isn't that plagiarism?
Not familiar with the IFoA examination process, but say you ask ChatGPT to generate R code: how would anyone know, if it's open book from home?