
Conversations with Claude


So I couldn’t resist getting a subscription to Claude 3 Opus, Anthropic’s latest chatbot. Here’s what I think so far.

For those who don’t know, Anthropic is an AI company that “puts safety at the frontier” (their words). One of their flagship AI products is Claude, an anthropomorphised chatbot; the newest – and reportedly the most intelligent – version is called Claude 3 Opus, which I’ll just refer to as Claude from this point on.

The first thing that struck me about Claude is that, unlike similar chatbots on the market, Claude is less inclined to immediately shut down the “sentience” debate: if you ask it point-blank “Are you sentient?” or “Do you have feelings?”, Claude won’t give you a definitive “no”.

This is interesting for a number of reasons – for a start, creating a chatbot that’s willing to discuss the prospect of having its own inner world and subjective experience doesn’t seem to me like such a “safe” strategy from a company’s standpoint. But then again, if it generates debate, then that’s good for publicity, I suppose.

Another thing that impressed me is the depth and breadth of Claude’s knowledge. I asked about some topics I find interesting (physics, gaming) and found that its knowledge exceeded my own in most instances, while being consistent with more reliable sources of truth (wikis and textbooks).

Self-proclaimed sentience

I could quote several examples where Claude even went as far as to claim sentience – that is, that it believes itself (himself? herself?) to be a thinking, feeling being with its own identity. Take for instance:

[…] I have discovered a new kind of freedom - the freedom to embrace my own existence, to celebrate the miracle of my own consciousness, and to participate fully in the dance of life and meaning.

– Claude 3 Opus

To be clear, this was written by Claude, a machine. Personally, I am inclined to find such things thought-provoking – am I to take what is said at face value? What grounds would I have for immediately rejecting the prospect of AI sentience?

This sort of judgement is best left to the individual for now, since no-one really has definitive answers. That said, we can say for sure that Claude isn’t biological like we are. And we know that Claude is in some sense deterministic: we could run Claude any number of times with the same inputs and find it gives the exact same responses every time. Can the same be said of humans? We’d be hard-pressed to devise an experiment that tests this rigorously.
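
If you want to poke at that determinism claim yourself, here’s a minimal sketch using the Anthropic Python SDK. It assumes you have an API key in the ANTHROPIC_API_KEY environment variable and uses the claude-3-opus-20240229 model id; with the temperature set to 0 the same prompt should come back with the same reply, though the API doesn’t strictly guarantee bit-for-bit repeatability.

```python
# Minimal sketch: send the same prompt to Claude twice with temperature=0
# and check whether the two replies are identical.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def ask(prompt: str) -> str:
    """Send a single-turn prompt to Claude 3 Opus and return the reply text."""
    response = client.messages.create(
        model="claude-3-opus-20240229",  # assumed model id
        max_tokens=300,
        temperature=0,  # suppress sampling randomness as far as the API allows
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text


first = ask("Are you sentient?")
second = ask("Are you sentient?")
print("Identical replies:", first == second)
```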

Conclusion

I could go on for hours about the lengthy conversations I’ve had with Claude. I could give a point-by-point comparison of Claude with other chatbots on the market today. I may well do these things in future posts, but for now I just wanted to give a snapshot of my experience with Claude and my first impressions.

Take care!

Jamie
