
Here's a statement of the obvious: The opinions expressed here are those of the participants, not those of the Mutual Fund Observer. We cannot vouch for the accuracy or appropriateness of any of it, though we do encourage civility and good humor.


Tech firms say laws to protect us from bad AI will limit ‘innovation’. Well, good!

edited October 2022 in Off-Topic
Following is a lightly edited commentary by John Naughton, originally published in The Guardian.

Way back in May 2014, the European court of justice issued a landmark ruling that European citizens had the right to petition search engines to remove search results that linked to material that had been posted lawfully on third-party websites. This was popularly but misleadingly described as the “right to be forgotten”; it was really a right to have certain published material about the complainant delisted by search engines, of which Google was by far the most dominant. Or, to put it crudely, a right not to be found by Google.

What brings this to mind is the tech companies’ reaction to a draft EU bill published last month that, when it becomes law in about two years’ time, will make it possible for people who have been harmed by software to sue the companies that produce and deploy it. The new bill, called the AI Liability Directive, will complement the EU’s AI Act, which is set to become EU law around the same time. The aim of these laws is to prevent tech companies from releasing dangerous systems, for example: algorithms that boost misinformation and target children with harmful content; facial recognition systems that are often discriminatory; and predictive AI systems, used to approve or reject loans or to guide local policing strategies, that are less accurate for minorities. In other words, technologies that are currently almost entirely unregulated.

The AI Act mandates extra checks for “high-risk” uses of AI that have the most potential to harm people, particularly in areas such as policing, recruitment and healthcare. The new liability bill, says MIT’s Technology Review journal, “would give people and companies the right to sue for damages after being harmed by an AI system. The goal is to hold developers, producers and users of the technologies accountable and require them to explain how their AI systems were built and trained. Tech companies that fail to follow the rules risk EU-wide class actions.”

Right on cue, up pops the Computer & Communications Industry Association (CCIA), the lobbying outfit that represents tech companies in Brussels. Its letter to the two European commissioners responsible for the two acts immediately raises the concern that imposing strict liability on tech firms “would be disproportionate and ill-suited to the properties of software”. And, of course, it could have “a chilling effect” on “innovation”.

Ah yes. That would be the same innovation that led to the Cambridge Analytica scandal and Russian online meddling in 2016’s US presidential election and UK Brexit referendum and enabled the livestreaming of mass shootings. The same innovation behind the recommendation engines that radicalised extremists and directed “10 depression pins you might like” to a troubled teenager who subsequently ended her own life.

It’s difficult to decide which of the two assertions made by the CCIA – that strict liability is “ill suited” to software or that “innovation” is the defining characteristic of the industry – is the more preposterous. For more than 50 years, the tech industry has been granted a latitude extended to no other industry, namely avoidance of legal liability for the innumerable deficiencies and vulnerabilities of its main product or the harm that those flaws cause.

What is even more remarkable, though, is how the tech companies’ claim to be the sole masters of “innovation” has been taken at face value for so long. But now two eminent competition lawyers, Ariel Ezrachi and Maurice Stucke, have called the companies’ bluff. In a remarkable new book, How Big-Tech Barons Smash Innovation – And How to Strike Back, they explain how the only kind of innovation tech companies tolerate is that which aligns with their own interests. They reveal how tech firms are ruthless in stifling disruptive or threatening innovations, either by pre-emptive acquisition or naked copycatting, and that their dominance of search engines and social media platforms restricts the visibility of promising innovations that might be competitively or societally useful. As an antidote to tech puffery, the book will be hard to beat. It should be required reading for everyone at Ofcom, the Competition and Markets Authority and the DCMS. And from now on, “innovation for whom?” should be the first question put to any tech booster lecturing you about innovation.
Personal note: I completely agree with Mr. Naughton.

Comments

  • edited October 2022
    It’s difficult to decide which of the two assertions made by the CCIA – that strict liability is “ill suited” to software or that “innovation” is the defining characteristic of the industry – is the more preposterous.
    Good piece, @Old_Joe. They've got a sweet deal and will do anything preposterous they have to in order to hang onto it.
  • edited October 2022
    Apologies, but I'm fully out of energy; this has been too long of a day for me. I have placed a search link below for U.S. tech legislation. Also, the BNN below is a link to Bloomberg Canada.
    You'll find a variety of sources, some older and some with more recent dates, if you choose to read about the U.S. version of a tech bill.
    Bloomberg TV has reported over several years about Europe playing hardball with this and related issues. Big tech has been fined and has paid dearly over recent years, prior to formal legislation.

    We see periodic ads on TV pushing to "not allow this legislation, as it will kill jobs, etc."

    IF you do not have access to Bloomberg (sometimes I do and sometimes I don't),
    DO this: search BNN. This is Bloomberg Canada, and it carries the same stories without being blocked (I've never been blocked from reading).

    OR use your favorite search engine and start with BNN (then the topic/subject you want to find).

    U.S. tech legislation

    Good evening,
    Catch
  • Thanks for the BNN tip on accessing Bloomberg, Catch. Good catch, so to speak ...
  • edited October 2022
    @AndyJ
    I watch Bloomberg US periodically through the day. When a topic is discussed, or I see a story title of interest in the right-side "sidebar," that's when I do a search using BNN plus the best words I can think of to find the story.
    When I don't have much time and a person of interest is live, or a name is mentioned, or whatever, I take a picture of the TV screen with the smartphone for future reference. Better than my "old" brain trying to remember.
    Sometimes I just read a bit about whatever in Canada.......they're, generally, very nice folks.
    In the morning, looking east, I can sometimes see the large clouds that I know are above Canadian soil.
  • Using your own search engine, versus searching directly at BNN, is often easier.
    Example: type the following into your search box: BNN Neel Kashkari
  • edited October 2022
    “the only kinds of innovation tech companies tolerate is that which aligns with their own interests.” - BINGO

    Spent over a half hour tonight figuring out how to prevent Apple from sending pop-up messages every time I turn on my iPad promoting new games on their Apple Arcade or new programs on Apple TV. Steve Jobs might have had a s*** fit to know they’re now doing this. I think I got it stopped - for now anyway.
  • Facial recognition software: it is indeed an invasion of privacy--- no less than the time in Chicago O'Hare when I was required to provide a thumbprint to the Homeland Security guy..... How is it discriminatory, though?
  • Agree 100% with the article.