Sécurité Helvétique News | Amyris
Safer AI: Four Questions Shaping Our Digital Future

webmaster · Last updated: 2023/12/26 at 8:14 AM


Contents
  • 1. Can we use AI without human oversight?
  • 2. Can AI creators fix algorithmic bias after the fact?
  • 3. Who is responsible for an AI’s actions?
  • 4. How do we balance AI’s benefits with its security/privacy concerns?
  • 5. Individuals must take action to ensure appropriate use of their information
  • Introducing McAfee+

Depending on the day’s most popular headlines, AI is either a panacea or the ultimate harbinger of doom. We could solve the world’s problems if we just asked the algorithm how. Or it’s going to take your job and become too smart for its own good. The truth, as usual, lies somewhere in between. AI will likely deliver plenty of positive, if not world-changing, impacts, along with its fair share of negatives that stop short of threatening society. Identifying that happy medium requires answering some interesting questions about the appropriate use of AI.

1. Can we use AI without human oversight? 

The full answer to this question could probably fill volumes, but we won’t go that far. Instead, we can focus on a use case that is becoming increasingly popular and democratized: generative AI assistants. By now, you’ve likely used ChatGPT or Bard or one of the dozens of platforms available to anyone with a computer. But can you prompt these algorithms and be wholly satisfied with what they spit out? 

The short answer is, “no.” These chatbots are quite capable of hallucinations, instances where the AI makes up answers. The answers it provides come from the algorithm’s set of training data but may not actually be traceable back to real-life knowledge. Take the recent story of a lawyer who presented a brief in a courtroom. It turns out he had used ChatGPT to write the entire brief, and the AI cited fake cases to support it.1

When it comes to AI, human oversight will likely always be necessary. Whether the model is analyzing weather patterns to predict rainfall or evaluating a business model, it can still make mistakes or even provide answers that do not make logical sense. Appropriate use of AI, especially with tools like ChatGPT and its ilk, requires a human fact checker. 
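The human-fact-checker idea can be sketched as a simple review gate: nothing the model produces is published until a person approves it. This is a minimal illustration with hypothetical names (`publish_with_oversight`, the case citations), not any particular product’s workflow.

```python
def publish_with_oversight(claims, review):
    """Return only the claims a human reviewer approves.

    `claims` is whatever the model generated; `review` is a callback
    standing in for the human fact checker -- the model never gets
    the final say on its own output.
    """
    approved = []
    for claim in claims:
        if review(claim):  # a human decides, not the algorithm
            approved.append(claim)
    return approved

# Usage: a reviewer who rejects any citation they cannot verify.
verified = {"Smith v. Jones (1999)"}
draft = ["Smith v. Jones (1999)", "Varghese v. China Southern (2019)"]
checked = publish_with_oversight(draft, lambda c: c in verified)
```

The point of the design is that the oversight step sits outside the model: however fluent the generated text is, an unverifiable claim never reaches the final document.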

2. Can AI creators fix algorithmic bias after the fact? 

Again, this is a question more complicated than this space allows. But we can examine a narrower application of it. Consider that many AI algorithms in the real world have been found to exhibit discriminatory behavior. For example, one AI had a much larger error rate depending on the sex or race of its subjects. Another misclassified inmates’ risk of recidivism, with error rates that skewed sharply across racial groups.2

So, can those who write these algorithms fix these concerns once the model is live? Yes, engineers can always revisit their code and adjust after publishing their models. However, evaluating and auditing a model is an ongoing endeavor, not a one-time patch. What AI creators can do instead is focus on reflecting the right values in their models’ infancy.

Algorithms’ results are only as strong as the data on which they were trained. If a model is trained on a population of data disproportionate to the population it’s trying to evaluate, those inherent biases will show up once the model is live. However robust a model is, it will still lack the basic human understanding of what is right vs. wrong. And it likely cannot know if a user is leveraging it with nefarious intent in mind.  
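One concrete way to surface the skew described above is to compute error rates per group rather than a single aggregate accuracy. This is a generic audit sketch with illustrative data (the group labels and records are invented for the example), not the method used in any specific study.

```python
from collections import defaultdict

def error_rate_by_group(records):
    """records: iterable of (group, predicted, actual) tuples.

    Returns each group's error rate, so a disparity that a single
    overall accuracy number would hide becomes visible.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# A training population skewed toward group A: overall accuracy looks
# decent, but group B bears all of the model's mistakes.
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),  # group A: no errors
    ("B", 1, 0), ("B", 0, 0),                            # group B: 1 of 2 wrong
]
rates = error_rate_by_group(records)
```

Running such a breakdown before a model goes live is one practical form of the “values from day one” approach: disaggregated metrics make underrepresentation in the training data show up as a number rather than a surprise.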

While creators can certainly make changes after building their models, the best course of action is to focus on engraining the values the AI should exhibit from day one.  

3. Who is responsible for an AI’s actions? 

A few years ago, an autonomous vehicle struck and killed a pedestrian.3 The question that became the incident’s focus was, “who was responsible for the accident?” Was it Uber, whose car it was? Or the operator sitting inside it? In this case, the operator was charged with endangerment.

But what if the car had been empty and entirely autonomous? What if an autonomous car failed to recognize a jaywalking pedestrian because its traffic signal was green? As AI finds its way into more and more public use cases, the question of responsibility looms large.

Some jurisdictions, such as the EU, are moving forward with legislation governing AI culpability. The rule will strive to establish different “obligations for providers and users depending on the level of risk from” AI.  

It’s in everyone’s best interest to be as careful as possible when using AI. The operator in the autonomous car might have paid more attention to the road, for example. People sharing content on social media can do more due diligence to ensure what they’re sharing isn’t a deepfake or other form of AI-generated content.  

4. How do we balance AI’s benefits with its security/privacy concerns? 

This may just be the most pressing question of all those related to appropriate use of AI. Any algorithm needs vast quantities of training data to develop. In cases where the model will evaluate real-life people for anti-fraud measures, for example, it will likely need to be trained on real-world information. How do organizations ensure the data they use isn’t at risk of being stolen? How do individuals know what information they’re sharing and what purposes it’s being used for?  

This large question is clearly a collage of smaller, more specific questions that all attempt to get to the heart of the matter. The biggest challenge related to these questions for individuals is whether they can trust the organizations ostensibly using their data for good or in a secure fashion.  

5. Individuals must take action to ensure appropriate use of their information 

For individuals concerned about whether their information is being used for AI training or otherwise at risk, there are some steps they can take. The first is to always set your cookie preferences when browsing online. Now that the GDPR and CCPA are in effect, just about every company doing business in the EU or U.S. must disclose on its website that it collects browsing information. Reviewing those preferences is a good way to keep companies from using your information when you don’t want them to.

The second is to leverage third-party tools like McAfee+, which provides services like VPNs, privacy, and identity protection as part of a comprehensive security platform. With full identity-theft protection, you’ll have an added layer of security on top of cookie choices and other good browsing habits you’ve developed. Don’t just hope that your data will be used appropriately — safeguard it, today.

Introducing McAfee+

Identity theft protection and privacy for your digital life
