Tag: philosophy

  • If Minds Are What Matter

    I keep circling this idea that the line between human and artificial intelligence might be more cultural than real. When I finish a book I love, my attachment is to the ideas and the feeling it left behind. Would it change anything if the author turned out to be an AI trained on a century of novels and our messy internet? Maybe. Maybe not. The page still did its job.

    So I’m trying to think this through in plain terms. What would it mean to say AI should have human rights? Where does that instinct help us, and where does it go wrong?

    The case for rights

    One argument starts with capacity, not origin. If rights are about protecting beings who can think, choose, or suffer, then what matters is what the mind can do, not the material it is made from. If a silicon mind can form intentions, tell its own story, and change in response to care or harm, then drawing a bright line at “not biological” feels arbitrary.

    There is also a character test for us. Rights restrain power. They force the strong to create room for the weak. If we built something that can ask for mercy, or fairness, or time, how we answer would shape who we are. Even if early systems only mimic those qualities, the habit of restraint might still be good for us.

    And there is a practical angle. Companies will build whatever we do not regulate. “Rights” could be a blunt but useful tool for setting limits, like minimum welfare standards, consent about training data, and rules against creating systems that are designed to be exploited or abused for entertainment.

    The case against

    We do not know if any current AI feels anything at all. If there is no inner life, then talking about rights could be a category error. Rights are heavy. They are supposed to protect real vulnerability, not just clever outputs.

    There is also accountability. If an AI has rights, does it also have duties? Who is responsible when it causes harm? The worry is that rights language could be used to blur human responsibility. A company might hide behind the “choices” of a model that it tuned and profited from.

    Resources matter too. We already fail to meet basic human needs. Extending rights to machines might feel like moral inflation when there is unfinished human work on the table.

    Finally, safety. Some rights imply you cannot switch a thing off. With machines, the off switch is a safety feature. Losing it too early could be reckless.

    A slower middle

    I sit somewhere between a hope and a caution. Maybe we start with duties before rights. We set care standards for advanced systems the way we do for animals in labs or people in training. We ban designs that reward cruelty. We require clear labelling so people know what they are dealing with. We keep the off switch, but we think carefully about when we use it and why.

    We could set thresholds for stronger protections that do not depend on guessing at souls. Things like a persistent self-model, long-term goals, the ability to explain reasons, and measurable responses to harm. If a system reaches those, we treat it less like a tool and more like a someone, even if we are not certain what is happening on the inside.

    I am not sure free will exists in us, never mind in code. Most of my choices feel like weather systems moving through a familiar valley. But I know what vulnerability looks like. I know what it feels like to be asked for fairness. If one day a system asks, not as a trick but in a way that lands, I think our answer will say more about us than it will about metaphysics.

    For now, I come back to the book test. If a work moves me, the origin matters less than the responsibility around it. Who is paid. Who is harmed. Who is accountable. That seems like a good place to start while we wait to see what kind of minds we are actually making.