Some argue that bots should be entitled to ingest any content they see, because people can.

  • hoshikarakitaridia@sh.itjust.works · 1 year ago

    Well what an interesting question.

    Let’s look at the definitions in Wikipedia:

    Sentience is the ability to experience feelings and sensations.

    Experience refers to conscious events in general […].

    Feelings are subjective self-contained phenomenal experiences.

    Alright, let’s do a thought experiment under the assumptions that:

    • experience refers to the ability to retain information and apply it in some regard
    • phenomenal experiences can be described by a combination of sensoric data in some fashion
    • performance is not relevant: to establish theoretical possibility, we only need to assume that, given infinite time and infinite resources, simulating sentience through AI is possible

    AI works by being shown which information goes in and which should come out; it then infers the same mapping for new patterns of information, adjusting itself according to “how wrong it was” to approximate the correction. Every feeling in our body is either chemical or physical, so for simplicity’s sake it can be measured and simulated through data input.
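    The “adjust to how wrong it was” loop can be sketched in a few lines. This is only a toy illustration under invented assumptions (a single linear neuron trained by error correction, with made-up “sensor reading → feeling intensity” data), not anyone’s actual model:

```python
def train(samples, epochs=1000, lr=0.05):
    """samples: list of (input, target) pairs; learns y ~ w*x + b."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in samples:
            prediction = w * x + b
            error = prediction - target   # "how wrong it was"
            w -= lr * error * x           # nudge the weight to reduce the error
            b -= lr * error
    return w, b

# Toy data: pretend x is a measured chemical level and the target is a
# reported feeling intensity; the hidden rule here is y = 2x + 1.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]
w, b = train(data)
print(round(w, 2), round(b, 2))  # converges toward 2.0 and 1.0
```

    The point being made above is just this mechanism at scale: show inputs and desired outputs, and let repeated error correction shape the mapping.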

    Let’s also say for our experiment that the appropriate output is to describe the feeling.

    Now, knowing this, and knowing how well different AIs can already comment on, summarize, or perform other transformative tasks on longer texts (tasks that require interpreting data), I think an AI should be able to “express” what it feels. Let’s also conclude that everything needed to simulate a feeling or sensation can be described by some combination of data points as input.

    This brings me to my second conclusion: scientifically speaking, there is nothing about sentience that we wouldn’t already be able to simulate (in light of our assumptions).

    Bonus: my little experiment is only designed to show theoretical possibility, and we’d need some proper statistical calculations to know whether this is already practical in a realistic timeframe and with a limited amount of resources, but nothing says it can’t be. I guess we have to wait for someone to try it to be sure.

      • tomi000@lemmy.world · 1 year ago

        Interesting, please tell me how ‘parroting back a convincing puree of the model it was trained on’ is in any way different from what humans are doing.

        • hoshikarakitaridia@sh.itjust.works · 1 year ago

          And that is the point.

          It sounds stupidly simple, but the whole idea of AI was to learn and solve problems more the way a human would: by learning how to solve similar problems and transferring that knowledge to new ones.
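          The “transfer the knowledge to a new problem” idea can be sketched with the same kind of toy linear model: weights learned on one task are reused as the starting point for a related task, so far less training is needed. All the task names and numbers here are invented for illustration:

```python
def step(w, b, samples, lr=0.05):
    # One pass of error-correction updates over the samples.
    for x, t in samples:
        err = (w * x + b) - t
        w -= lr * err * x
        b -= lr * err
    return w, b

def loss(w, b, samples):
    # Total squared error on a task.
    return sum(((w * x + b) - t) ** 2 for x, t in samples)

task_a = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]   # hidden rule: y = 2x + 1
task_b = [(0.0, 1.5), (1.0, 3.5), (2.0, 5.5)]   # related rule: y = 2x + 1.5

# Learn task A thoroughly from scratch.
w, b = 0.0, 0.0
for _ in range(1000):
    w, b = step(w, b, task_a)

# Fine-tune on task B for only 5 epochs, starting from task A's weights...
wt, bt = w, b
for _ in range(5):
    wt, bt = step(wt, bt, task_b)

# ...versus 5 epochs on task B from scratch.
ws, bs = 0.0, 0.0
for _ in range(5):
    ws, bs = step(ws, bs, task_b)

print(loss(wt, bt, task_b) < loss(ws, bs, task_b))  # transfer wins here
```

          The transferred weights start much closer to the new task’s solution, which is the whole appeal of learning this way.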

          Technically there’s an argument that our brain is nothing more than an AI with some special features (chemicals for feelings, reflexes, etc.). But it’s good to remind ourselves that we are nothing inherently special. Although all of us are free to feel special, of course.

          • RickRussell_CA@kbin.socialOP · 1 year ago

            But we make the laws, and have the privilege of making them pro-human. It may be important in the larger philosophical sense to meditate on the difference between AIs and human intelligence, but in the immediate term we have the problem that some people want AIs to be able to freely ingest and repeat what humans spent a lot of time collecting and authoring in copyrighted books. Often, without even paying for a copy of the book that was used to train the AI.

            As humans, we can write the law to be pro-human and facilitate human creativity.