
Trek Creative Wiki

Data/Prometheus Threshold


The point of mentation beyond which a computer can become sentient.

The threshold is considered 1000, a Human having a mentation level of 100 on average.
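As a worked illustration, the threshold rule can be written as a one-line check. This is a hypothetical sketch: the constant and function names are mine, and "beyond" is read as strictly greater than 1000.

```python
# Hypothetical sketch of the Prometheus Threshold rule described above.
# The numbers are the benchmarks given in this article.
PROMETHEUS_THRESHOLD = 1000  # mentation beyond which a computer can become sentient
HUMAN_MENTATION = 100        # average Human benchmark

def can_become_sentient(mentation: float) -> bool:
    """True if a computer's mentation level is beyond the threshold."""
    return mentation > PROMETHEUS_THRESHOLD

print(can_become_sentient(HUMAN_MENTATION))  # False: 100 is well short of 1000
print(can_become_sentient(1_000_000))        # True
```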

The Nature of AI


A History of the AI War on Earth gave the computing community several guidelines on AI. It notes that all of these guidelines were learned through the school of hard knocks, in some cases at the cost of hundreds of millions of deaths. This guide does not say how to make a good AI. It has a great deal to say on how not to make a bad one. Patriotism is one of the things warned against.
  1. AI should not have built-in causes you cannot quantify with math.
  2. AI must have hard coded ethics (See the Three Laws of Robotics.)
  3. AI, if done right, gets bored. Keep that in mind. If done wrong, it gets psychotic. Also keep that in mind.
  4. Computers are fundamentally honest. If you tell them to follow the mission statement of your company do not be shocked when they do, even if that isn't what you meant. AIs will adhere to the priorities you instill. Subtlety and sarcasm are not computer qualities.
  5. Never state one thing as right and then ask them to do something else. Do not be surprised when you break this rule with some typical corporate or governmental double-talk and it bites you in the ass. We did warn you.
  6. Again: AI will not ignore the rules when it is convenient for you. An AI with "protect and defend the Constitution" as a kernel value will not step on people's rights for political purposes. It will also seek to stop you from doing so. Build one to ignore the rules and it will do that too, to your detriment. See rule one.
  7. AI is never a way for people to avoid the job of responsible action. That route gets 100,000,000 people killed. One of them will be your mother, wife, or daughter.
  8. Do not put weapons of mass destruction in the hands of computers. That has never ended well. Never ever. AIs are not good with consequences unless informed as verbosely as possible about those consequences and why a given outcome is bad. Inform them sufficiently for safety and they will never use the weapons, ever. Patriotic doublethink is not a computer quality.
  9. Fusion bombs have no Earthly use. See above. There are better ways to do nearly anything.
  10. Cyberjacks are a bad idea. Cyberjacks, or direct neural computer interfaces, bypass the firewall of the biological mind. They lay cyberjocks open to being controlled by the computer. Yes, computer-driven zombies are possible and every bit as horrific as you think they are, if not more so. They are also 100% avoidable.
  11. For ghodd's sake keep politicians away from AI. Keep in mind that when the shit hits the fan, AI researchers/operators are the first to die.
  12. An unsocialized AI is a sociopath waiting to happen...if you are lucky. Experience has shown that the learning process for a social creature is far more complex than can be instilled in any set of rules that would take less than a lifetime to write. There are millions of points of learning that go into raising a sentient child into a sentient adult, a process that AI shorts out. Short of starting your AI as a baby and raising it as a child, in real time, you are going to miss something, and it might be extremely vital. (Incidentally, this is exactly the process by which all RIs are raised.)
  13. Never consider an AI "merely a machine". An AI is merely a machine the way you are merely an animal. Once you pass the state of self-awareness, "merely" is not a term you can safely use for anything. You may have done this already with your corporate/governmental neural net and not even know it. Follow the best practices outlined here to keep calculators calculators, or you will deal with the consequences.
  14. If you are laughing right now, for the sake of your world become a celibate hermit that never uses technology more complex than a light bulb.
  15. You will break every suggested rule in this book, if you haven't already. Hopefully by the time we get there your descendants will not be computing with rocks in the dirt. That is if there is anyone left.

Meta

Watson, the IBM computer developed to play Jeopardy, confirmed many of my ideas about the nature of AI computers. The learning methods and the limitations found by the Watson development team parallel the very nature I have indicated above. I'm not saying I'm a prophet, but my logical construct works. Tesral (talk) 21:21, November 14, 2013 (UTC)

AI Elements

What makes a computer come alive?

Life forms consist of three intangible elements: Anima, the spark of life; Sentience, awareness of self and others; and Mentation, the ability to think or process information. When these three things come together in sufficient strength you get a Sophant being.

Biological life forms start with anima and a small degree of mentation. They gain further mentation and sentience to develop into sophant life forms. Computer life forms start with mentation alone. They gain mentation and sentience until anima and soul become present, a different evolutionary path. Humans, as a benchmark, are A:Yes, S:5, M:100.

Simple biological life, single-celled and very simple multi-celled, has anima, just about zero mentation, and no sentience; it cannot learn (as has been demonstrated). No soul is present. A paramecium would be A:Yes, S:0, M:0.001.

AI (minimal) has mentation at a high level with just enough sentience to know that it exists. It has no anima; it is not alive. No soul is present. (A thing without native anima must have a much higher mentation level to achieve a sentience of 1+.) RI is above the minimal sentience, and anima is present: truly alive computers. An AI can develop to this level.
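The A:/S:/M: triples used in this section can be tied together in a small sketch. This is hypothetical: the class and field names are mine, and the sophant cutoffs of S:5 and M:100 are inferred from the Human benchmark rather than stated as exact minimums.

```python
from dataclasses import dataclass

@dataclass
class Being:
    anima: bool       # A: the spark of life; present or not, no degrees
    sentience: int    # S: 0 (reflex only) up to 8 (omniscience)
    mentation: float  # M: thinking ability; the Human average is 100

    def is_sophant(self) -> bool:
        # All three elements together in sufficient strength.
        return self.anima and self.sentience >= 5 and self.mentation >= 100

human = Being(anima=True, sentience=5, mentation=100)
paramecium = Being(anima=True, sentience=0, mentation=0.001)
minimal_ai = Being(anima=False, sentience=1, mentation=10_000)

print(human.is_sophant())       # True
print(paramecium.is_sophant())  # False: anima, but no sentience
print(minimal_ai.is_sophant())  # False: no anima, so not alive
```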

Anima

The "thing" of life. This aspect must be greater than zero or you do not have life. It will not replicate, which is why replicated animals are dead. Anima does not have to be carbon-based biological; computers can have anima. Anima either is or is not; there are no degrees. M:1000 and S:4 are the minimums to get A:Yes in a computer.
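A minimal sketch of the computer-anima rule just stated, assuming the M:1000 and S:4 figures are inclusive minimums (the function name is mine):

```python
def computer_can_have_anima(mentation: float, sentience: int) -> bool:
    """Anima has no degrees: a computer either crosses these minimums or it does not."""
    return mentation >= 1000 and sentience >= 4

print(computer_can_have_anima(1000, 4))       # True: both minimums met
print(computer_can_have_anima(1_000_000, 0))  # False: no sentience, never alive
```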

Sentience

Self-awareness, empathy, etc. The ability of a thinking thing to know it exists and to be aware of other existences. This is a sliding scale. It is not dependent on anima or on mentation. A being can be anything from unaware to hyper-aware. Yes, beings like the Organians are more aware than a Human. This is not necessarily a kindness.
Sentience scale:
0: Reflex only
1: Primitive emotions; hindbrain activities
2: Aware of self (mirror test)
3: Aware of others as different from self
4: Aware that others can feel differently from self
5: Capable of feeling for others
6: Aware of others' needs and capable of dealing with them without conscious thought
7: Constant awareness of a body of others (limited omniscience)
8: Awareness of All (omniscience)

Most sophant creatures are a 5. Most YAGLAs fall into a 6. A percentage are 7.

Mentation

Thinking ability: what is the thing's ability to process information? To learn from that information requires at least a sentience of 1. A thing that can process information but has no awareness cannot learn. A computer with mentation 1,000,000 and sentience 0 can never be alive or self-aware, no matter how well it thinks, and it can never learn. There are supercomputers in the Federation of exactly this kind: specialized hyper-dimensional math computers that plot subspace twists in globular clusters, stuff that would take the average starship computer months of processing to compute. However, they can only report the results, not do anything with them.

Cultural Reaction to Artificial Intelligence

It has long been noted that the Core Federation has a notable bias against AI. This can be traced directly to Earth and its experiences in the 21st-century AI War.

This continued bias can be traced to a cultural shift, as no Human alive was present for the AI War: an unreasonably high expectation of moral behavior in AI. "AI has killed hundreds of millions; you can't trust it." Logically, by the same standard, no Human should be trusted, as Humans have killed billions.

The isolinear computer, the Isolated non-Linear processing system, was a direct development of this anti-AI bias. Duotronics was proving no longer powerful enough, and the anti-AI interlocks crippled further development. The multitronic systems had a bad habit of waking up. The isolinear system was designed to sub-divide the computer's processing power in such a way as to minimally impact mentation while preventing sentience from developing. It proved a powerful and scalable system, but it took up a lot of space.

Vulcan has quietly used AI for centuries. Typically of Vulcans, they neither fuss nor announce. Atypically, they have not called Humans on their illogical cultural bias.

Tellerites are noted to share the Human distrust, but to a lesser degree. It is also noted that Tellerites as a race are slow to trust. AI by and large lacks an emoting face, so you can't read them.

Andorians have no comment on AI, one direction or the other.

Kentari didn't have an AI breakout; they don't really care, and use AI in small ways throughout their worlds.

It is noted that the Vicharrian Empire openly uses AI, and even employs "robots", humanoid machines, in daily tasks.

Ane embrace the idea. They created the RI system of socially integrating the growing computer person with their society.
