Recently I had a discussion on the topic of trust, and it got me thinking about large language models. I will come back to LLMs shortly, but first imagine the following situation:
You ask a real person for some piece of information, and what they give you is false, but you don't know it yet.
Note: I’m not an expert in type systems, and my knowledge of compilers is limited. This is more of a random thought I’ve had for some time, and I’ve just decided to capture it here; it’s not meant as an argument about static vs. dynamic types.