Automation, AI, Transhumanism

Posted by DrEdwardHudgins 6 years, 11 months ago to Technology
43 comments

Here’s a Podcast in which I cover the ground from driverless vehicles to the economics of automation to AI to transhumanism. Hope you enjoy it!


All Comments


Previous comments...   You are currently on page 2.
  • Posted by $ AJAshinoff 6 years, 11 months ago in reply to this comment.
    Well, that makes absolutely no sense. Even so, was any attempt made to avoid the woman? No. As mentioned before, AI should be tracking and plotting courses for contacts well before they intersect. Even if the braking was turned off (which makes absolutely no sense), the AI still should have seen a walking person with ample time to avoid a collision. To AI, normal visual impairments do not apply.
    Reply | Permalink  
  • Posted by DrZarkov99 6 years, 11 months ago in reply to this comment.
    It was a human failure. The automated braking had been turned off, and the "safety" driver was too busy with her smartphone to do what she was there for. If the users had allowed the machine to do what it was designed to do, the victim would have had a chance of survival.
    Reply | Permalink  
  • Posted by $ AJAshinoff 6 years, 11 months ago in reply to this comment.
    She could have been plastered and it still would not be acceptable. What's the point of having AI if it's not to extend man's senses to make things safer? Intoxicated or not, it should have at least hit the brakes the moment it detected her movement. The video shows it should have detected her and prepared by at least slowing out of caution.
    https://www.youtube.com/watch?v=XTXd5...
    Reply | Permalink  
  • Posted by DrZarkov99 6 years, 11 months ago
    The challenge in creating an artificial intelligence that mirrors the way the human brain operates is to allow rapid autonomous reprogramming of the logic stream. We have only recently discovered that our brains have agent neurons that dynamically reroute neuronal pathways to enable modification of our reaction to a changing environment. What that says is that we have a lot to learn about how WE think before we can actually create a peer android.

    At one time, we thought of the human brain as a single processor system, but when we measured the speed of the electrochemical message transmission at only about 250 mph, we quickly realized that the single processor was wrong. Taking just one function, that of vision processing, it became obvious that the vision center in the brain couldn't interpret raw data fast enough. That led to the discovery that eyesight is the result of serial processing as the information flows down the optic nerve, finishing in the vision center. Even then, the process takes a number of milliseconds, which means we would be seeing images "in the past." That led to the discovery that the vision process is technically prescient, extrapolating information received to predict what the current picture is.

    An AI system doesn't have our limitations nor does it need the incredible "workarounds" we have, like muscle memory, to bypass thinking in order to react. In that sense, an AI being will not emulate us, but will be a unique creation, less complex than we have to be in order to function. That will have an effect on how it chooses to interact with us.

    Should we attempt to instill emotion into AI beings? Pure logic can seem heartless and dangerous in certain circumstances. We have the choice of trying to make AI beings make the same moral choices we would, or have them defer to a human when it determines an emotional/moral choice is to be made.

    I just jumped into this to illustrate that we may be attempting to cross a bridge too far, racing into developing artificial intelligence before we really understand the complexities involved.
    Reply | Permalink  
  • Posted by jconne 6 years, 11 months ago
    Defining our terms - we need to correct a common misunderstanding reflected in our naming of AI. It's MUCH more accurate to see it as Augmented Intelligence, much as eyeglasses augment vision, as do telescopes, microscopes, etc. This includes electron microscopes, night-vision technology, and astronomical visualization gear across all electromagnetic frequencies. And then there's ultrasound and sonar using the sound spectrum.

    It's all about automating information processing, including sensing, and translating information into a human consumable form.

    None of this is Intelligence, which is a distinctly human capability. To the extent we know things, we benefit by automating and speeding up that capture and processing. To the extent we don't, we need to experience the consequences of that aspect of the world or of our actions. That's where true intelligence comes into the picture.

    If we don't appreciate what's uniquely human, we can't give it the respect it deserves. This becomes an important, practical philosophical issue.
    Reply | Permalink  
  • Posted by DrZarkov99 6 years, 11 months ago in reply to this comment.
    The media isn't telling the whole story about the Arizona incident. The woman had a significant drug level in her system, which probably led her to make the very poor decision to cross the road nowhere near a crosswalk. The car's sensors did detect her, but only about one second before she was hit. It was very likely that she would have still been hit, even if the car's emergency braking had not been disabled. Darwinism in action.
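    A back-of-envelope check supports the claim that one second of warning was too late. This is only a sketch: the ~39 mph speed and the 7 m/s² braking rate are illustrative assumptions (typical of reporting on the incident and of dry-road emergency stops), not figures from this thread.

```python
# Can a car stop with one second of warning? Illustrative numbers only.
v = 17.4          # m/s, roughly 39 mph (assumed speed at detection)
a = 7.0           # m/s^2, typical dry-road emergency deceleration (assumed)
t_warn = 1.0      # s between detection and impact

dist_to_impact = v * t_warn              # ~17.4 m covered in that second
time_to_stop = v / a                     # ~2.5 s needed to stop fully
stop_dist = v ** 2 / (2 * a)             # ~21.6 m needed to stop fully
v_at_impact = max(v - a * t_warn, 0.0)   # ~10.4 m/s even with full braking

print(dist_to_impact, time_to_stop, stop_dist, v_at_impact)
```

    On these assumptions the car needs roughly 21.6 m to stop but covers 17.4 m in that final second, so even instant, full braking would only have reduced the impact speed to about 10 m/s, consistent with the comment that she would very likely still have been hit.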
    Reply | Permalink  
  • Posted by $ AJAshinoff 6 years, 11 months ago in reply to this comment.
    Agreed. I do think their use will be inevitable in time. Still, this was all over the local news here: it, the AI, should have detected and tracked that woman, determining her course, long before it struck her. Most bothersome was that it never slowed, even when she was in line of sight. The driver, confident in the computer, wasn't paying attention. A woman is dead.
    In the Navy our ships tracked dozens of contacts from miles away, ensuring ample time to adjust course. While 25-50 miles of discovery and tracking in any direction isn't necessary, surely 300 yards of detection and tracking should be easily doable.
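    The "track contacts and plot courses" idea above can be sketched with the standard constant-velocity closest-point-of-approach calculation used in maritime collision avoidance. All positions, speeds, and the scenario below are illustrative assumptions, not data from the incident.

```python
# Minimal CPA (closest point of approach) sketch for constant-velocity
# contacts. p_rel = contact position relative to us (m); v_rel = contact
# velocity relative to us (m/s).

def time_to_closest_approach(p_rel, v_rel):
    """Time (s) at which two constant-velocity contacts are closest."""
    vv = v_rel[0] ** 2 + v_rel[1] ** 2
    if vv == 0:
        return 0.0  # no relative motion
    t = -(p_rel[0] * v_rel[0] + p_rel[1] * v_rel[1]) / vv
    return max(t, 0.0)

def miss_distance(p_rel, v_rel):
    """Predicted minimum separation (m) and the time it occurs (s)."""
    t = time_to_closest_approach(p_rel, v_rel)
    dx = p_rel[0] + v_rel[0] * t
    dy = p_rel[1] + v_rel[1] * t
    return (dx * dx + dy * dy) ** 0.5, t

# Assumed scenario: pedestrian 80 m ahead and 5 m left, walking across
# the path of a car doing 17 m/s. Relative velocity is (-17.0, -1.4).
d, t = miss_distance((80.0, 5.0), (-17.0, -1.4))
print(d, t)
```

    In this hypothetical the near-collision (miss distance under 2 m) is predictable almost five seconds out, which is the commenter's point: with even 300 yards of tracking, the conflict is visible long before the intersection point.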
    Reply | Permalink  
  • Posted by 6 years, 11 months ago in reply to this comment.
    By the way, I think I mentioned in the podcast the problems in AZ. But I suspect it's really just a matter of time and perfecting the technology before driverless vehicles are on the road as a regular thing!
    Reply | Permalink  
  • Posted by $ Olduglycarl 6 years, 11 months ago in reply to this comment.
    It's been shown that the neurons are compartmentalized. A new one must be engaged for even slightly different conditions. It's either A or B; no combination or anything in between. Proving, in my view, that the brain itself can only hold compartmentalized information.

    Unicameral, left and right brain synchronization seems to be the key that leads to the mind which doesn't seem to be "in" your head.

    I shy away from spirits and souls thus far because they are unseen and unmeasurable concepts, whereas the Mind field can be measured...one day soon we will see the minds interaction with the quantum field and the brain decoding that interaction.
    Reply | Permalink  
  • Posted by $ Olduglycarl 6 years, 11 months ago in reply to this comment.
    I just hope these cars are not forced upon everyone and that they have safety features so that one can take control when they fail...and they will fail if not electronically hardened.

    The best AI can achieve is an imitation of conscious awareness...kind of like a super ego, but I can't see it obtaining an independent identity recognizable within the quantum field.
    Reply | Permalink  
  • Posted by $ WilliamShipley 6 years, 11 months ago in reply to this comment.
    I was addressing the question of the limits of AI, not the techniques. My view is that if we consider human intelligence a mechanistic process, no matter how complex the mechanism, it will be eventually duplicated and probably improved upon.

    Certainly there is a need for highly integrated processing. There does seem to be the use of relatively complex sub-units such as neurons which fire based on detecting specific patterns.

    It does seem like the brain deals with data that's summarized at the sensory level rather than a fire-hose of stimulation.
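    The "neurons which fire based on detecting specific patterns" mentioned above can be illustrated with the classic threshold unit (perceptron). The weights and patterns below are purely illustrative assumptions, not a model of any real neuron.

```python
# A minimal threshold unit: it "fires" only when the weighted sum of
# its inputs crosses a threshold, i.e. when the input matches the
# pattern encoded in its weights.

def fires(inputs, weights, threshold):
    """Return True when the weighted input sum reaches the threshold."""
    return sum(i * w for i, w in zip(inputs, weights)) >= threshold

# Illustrative unit tuned to an alternating on/off pattern.
alternation_detector = [1.0, -1.0, 1.0, -1.0]

print(fires([1, 0, 1, 0], alternation_detector, 1.5))  # matching pattern
print(fires([1, 1, 1, 1], alternation_detector, 1.5))  # uniform input
```

    This also illustrates the summarization point: downstream units see only the fire/no-fire result, not the raw stimulation that produced it.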
    Reply | Permalink  
  • Posted by 6 years, 11 months ago in reply to this comment.
    The point made above by olduglycarl about compartmentalization is important here. A self-conscious mind or one with volition (a difficult concept, I know) must be integrated, not just isolated sub-units. Pinker in his discussion of mind speaks of modules that are, ultimately, integrated. AI researchers appreciate this issue.
    Reply | Permalink  
  • Posted by $ WilliamShipley 6 years, 11 months ago in reply to this comment.
    The discussion on whether AI can duplicate the function of the human mind, including becoming self-aware really depends on whether you have a mechanistic view of the mind or a spiritual one. If our minds are entirely made up of the physical components of neurons etc. and do not have a mystical component such as a "soul" then it is inevitable that we will be able to build hardware that duplicates that function. It is very complex, though.

    I do think that the complexity needed to implement human intelligence is sufficiently high that we won't quickly zoom past human level, that the 'smarter' machine won't instantly design a superhuman intelligence, it will creep up slowly.
    Reply | Permalink  
  • Posted by $ AJAshinoff 6 years, 11 months ago
    First I must say this tech and discussion is seriously fascinating to me, the stuff of science fiction (emerging reality).

    Skynet
    Christine
    Maximum Overdrive

    Caution dictates that when you construct something based solely on logic, reason and experience and grant it peer or superior status over yourself that you run the risk of growing obsolete, less than useful OR something's thermal battery.

    Very interesting broadcast.

    Side note: In Tempe, AZ, testing was suspended because the car did not see a woman crossing in the middle of the road...the vehicle had a driver who wasn't paying attention, and it plainly did not see her. The tech should see and track everything around it regardless of trees, fog, buildings, smoke, and other vehicles, yet it clearly didn't and never even slowed down.
    Reply | Permalink  
  • Posted by 6 years, 11 months ago in reply to this comment.
    A wider discussion, for sure. As I say in the podcast, there are many questions concerning when an AI achieves the sort of cognition--integrated, self-aware--that is found in humans. But I see no fundamental reason why such consciousness cannot be manifest in something other than a human brain. After all, our brains developed over millions of years of evolution through natural selection and genetic mutations advantageous to survival. Would such a mind be based on some sort of nanotech-biohacked substance? Will it come about in 100 years? Sooner, with exponential tech advances? I don't know. But I don't see a fundamental reason why not.

    And on self-driving cars, they've been developed with remarkable speed and are quite good, better than many human drivers though not perfect. But I don't doubt they'll be pretty standard on our roads in the not-too-distant future.

    Cheers!
    Reply | Permalink  
  • Posted by $ Olduglycarl 6 years, 11 months ago in reply to this comment.
    Thanks for posting. It's a discussion we must have. But a discussion that needs a much wider, more integrated and sober view.

    I don't think so, IMHO. AI, artificial intelligence: I define that intelligence as compartmentalized information, meaning brain only, no integration, no wisdom; which, as I observe it at this time, is the best they will ever do.

    Self-driving cars would be a blessing for some people, but the technology has failed time after time...the technological issues are many.

    All of these technologies are not hardened against EMPs, natural or otherwise, and don't forget about hacking...to bank human lives on them is a travesty.

    The makers of compartmentalized machinery are themselves...compartmentalized at this point in time; not to mention, transforming humans into what will effectively be non-humans, much like what we see coming out of leftist academia and governments right now...just another brain in a body only...

    In the wider view, I don't think John Galt would be so impressed.
    Reply | Permalink  
  • Posted by 6 years, 11 months ago
    And this points to a future that even an innovator like Galt might have found hard to imagine but would welcome!
    Reply | Permalink  
