It's legally cogent, but it's hard to see how enforcement would be possible.
I mean, thinking about students as actors of pure bad faith: a student could easily copy and paste any instructions given to them into an LLM, bypassing the need for the language to appear in the training data at all. And even if an AI company respects the license and the source never ends up in the training set, model knowledge tends to generalize across a given area. The only approach I could see working is a language that is intentionally obtuse to write in (brainfuck, or really any other esolang, seems to fit), but that fails at being a good introductory programming language.
Yeah, I think that's a good point. Anything that's hard for an AI to learn tends to be hard for humans too.
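To make that concrete: brainfuck, the esolang named above, is only eight single-character commands. Here's a minimal interpreter sketch (my own illustration, assuming the standard semantics and skipping the `,` input command), which is part of why the language is trivial for a model to generalize over even though programs written in it are nearly unreadable for humans:

```python
# Minimal brainfuck interpreter: the entire language is eight
# single-character commands operating on a tape of byte cells.
def run(program: str, tape_size: int = 30000) -> str:
    tape = [0] * tape_size
    out = []
    ptr = 0   # data pointer
    pc = 0    # program counter

    # Pre-match brackets so loops can jump in O(1).
    jumps, stack = {}, []
    for i, c in enumerate(program):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i

    while pc < len(program):
        c = program[pc]
        if c == '>':
            ptr += 1
        elif c == '<':
            ptr -= 1
        elif c == '+':
            tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-':
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '.':
            out.append(chr(tape[ptr]))
        elif c == '[' and tape[ptr] == 0:
            pc = jumps[pc]   # cell is zero: skip past the loop
        elif c == ']' and tape[ptr] != 0:
            pc = jumps[pc]   # cell is nonzero: jump back to '['
        pc += 1              # any other character is a comment
    return ''.join(out)

# 13 loop iterations adding 5 each time leave 65 ('A') in cell 1.
print(run('+++++++++++++[>+++++<-]>.'))  # prints 'A'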
My argument was like "how would you even prove that it trained on it, bro?", and I think that's kind of a hard thing to do as well.
Classical open source and use restrictions are kinda at odds, so it doesn't really make sense. That's not to say people don't do it: you can restrict the software however you want, it's just not open source anymore (maybe look at some equivalent of Ethical Source?)