Leo Fernevak on Nostr:
Our own critical ability will always be the pivotal point in our use of any tool.
If we are to develop AI that appears to think, such a program will be unpredictable and will have the capacity to change its conclusions. It will need a reasoning ability that can reject non-rational positions. I haven't seen signs of that at all.
Having a program accept 2 + 2 = 5 in a conversation is not evidence of thinking; rather, it is evidence of a lack of argumentative capacity. Programmers have to simplify human thinking processes in order to emulate them, and that simplification is a problem with ripple effects.
AI development will therefore attract programmers who believe that human thinking is a simple process. It acts like a filtering mechanism: those who underestimate human capacities are the very people most likely to spend their time developing AI.
Many ethical programmers will likely reason along the lines of:
"Either I am not competent enough to develop high-quality AI, in which case it would be a waste of my time, or I am competent enough, in which case I would risk causing harm. Either way, it doesn't seem like a good use of my time."
It seems hard to overcome this filtering process.