Is ChatGPT AI the next Superman or humanity's Kryptonite?

By David Carvalho, CEO and co-founder of Naoris Protocol

     

Now that the dust has settled on the hype around ChatGPT, it may be a good time to unpack the full implications of this technology. While it certainly helps sleep-deprived college students ace term papers and gives copywriters a creative boost, it has a potentially dark underbelly.

Here, David Carvalho unpacks some of the not-so-pretty aspects of emerging AI technology and its potential to wreak havoc on businesses globally.

     

How can ChatGPT be used to exploit code, and can it really create code?

     

The short answer is yes. OpenAI's ChatGPT is a large language model (LLM)-based artificial intelligence (AI) text generator; all it requires is a prompt written as a normal English-language query.

GPT stands for Generative Pre-trained Transformer. The model is trained on a large sample of text from the internet, containing billions of words, from which it learns about every subject represented in that data. It can 'think' up everything from essays, poems, and emails to, yes, computer code.

     

It can generate code from plain-English descriptions, or take new and existing code as input. That capability can be exploited for malicious purposes or, more importantly, used for defensive and protective applications; it all comes down to the intentions of the user. While Google can show you an article on how to solve a specific coding problem, ChatGPT can write the code for you. This is a game-changer: it means developers could run near-instant security audits of application code and smart contract code to find vulnerabilities and exploits before implementation. It would also enable companies to make their deployment processes more thorough prior to launch, reducing vulnerabilities once deployed. That would be a significant contribution to the fight against cyberthreat damage, which is expected to exceed $10 trillion annually by 2025.
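
To make the audit idea concrete, here is a minimal sketch of what an LLM-assisted review step could look like. It is an illustration built on assumptions, not a method described in this article: it presumes the official OpenAI Python SDK (openai>=1.0), an OPENAI_API_KEY in the environment, "gpt-4" as the model name, and a deliberately flawed toy contract as input.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Toy smart contract with a classic re-entrancy flaw: the external call
    # happens before the sender's balance is updated.
    SMART_CONTRACT = """
    pragma solidity ^0.8.0;
    contract Vault {
        mapping(address => uint256) public balances;
        function withdraw(uint256 amount) external {
            (bool ok, ) = msg.sender.call{value: amount}("");
            require(ok, "transfer failed");
            balances[msg.sender] -= amount;
        }
    }
    """

    def audit(source: str) -> str:
        """Ask the model to flag likely vulnerabilities in the given source."""
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system",
                 "content": "You are a security auditor. List likely "
                            "vulnerabilities, each with a one-line explanation."},
                {"role": "user", "content": source},
            ],
        )
        return response.choices[0].message.content

    print(audit(SMART_CONTRACT))  # a capable model should flag the re-entrancy bug

A step like this would sit alongside, not replace, conventional static analysis and human review, for exactly the reasons discussed below.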

     

    What are some of the current limitations?

     

The downside is that bad actors can direct AI to find exploitable vulnerabilities in any popular, existing coding standard, smart contract code, or even well-known computing platforms and operating systems. This means that thousands of complex, at-risk environments in the real world could suddenly be exposed, at least in the short term.

     

AI is not conscious; it is an algorithm built on mathematical principles, weights, and biases. It will miss the basic preconceptions, knowledge, emotions, and subtleties that only humans see. It should be seen as a tool for catching vulnerabilities that humans code in error. While it could significantly improve the quality of coding across web2 and web3 applications, we can never, nor should we, fully trust its output. Despite this cautious approach, we should strive for confidence that we will be able to trust its baseline in the future.

     

Developers will still need to read and critique AI output, learning its patterns and looking for weak spots, while staying cognizant of the fact that threat actors are using it for nefarious purposes in the short term. However, I believe the net output is a positive addition to the maturity of all processes in the long term. There will always be new threats for it to analyse and mitigate, so while it may be a great tool to assist developers, it will need to work in tandem with dev teams to strengthen code and protect systems. The attacker's position will shift to finding bugs or errors in the output of the AI rather than in the code itself. AI will be a great tool, but humans will have the last word, hopefully. With some bumps along the way, this will be a net positive for the future of cybersecurity trust and assurance. In the short term, AI will expose vulnerabilities that will need to be addressed very quickly, and we could see a potential spike in breaches.

     

    Does regulation need to be updated to include/consider these models?

     

Regulation will be critical to the adoption of this type of AI, but it may also be sidestepped, because current regulation is analogue in nature: broad, self-policed, usually reactive rather than proactive, and incredibly slow to evolve, especially in a fast-changing, innovative "target area" like AI. Regulators in their current capacity might very well find themselves out of touch and out of their depth; they should be directly advised by specialists in the field and in academia to ensure quick reactions. Perhaps they should look at creating a completely separate regulatory body or council for ethics, with the purpose of regulating, or setting fundamental rules for, what is off-limits when using such powerful dual-use technologies. Regulations usually only kick in when something has gone wrong, and then it takes months, if not years, to get the regulation through the various iterations and approval processes. Currently, regulation in this field is not fit for purpose. The ability to oversee and implement regulation that keeps pace with the rate at which AI learns and executes output is a much-needed extra string to the compliance bow.

     

AI itself needs to be regulated, and the burning question is: should it be centralised? We need to seriously consider whether centralised tech companies or governments should hold the keys and be able to "bias the AI" to influence outcomes. A more palatable model would be a decentralised solution, or at least a decentralised governance system, that assures trust in the baseline systems that provide the answers, in the data behind those answers, and in all their processes, through an assurance mesh. We should perhaps look at a model similar to how web3 developers and validators are rewarded: the AI should have a pool of professional advocates who are incentivised to develop and evolve it to meet publicly shared ethical goals that ensure the technology is used for good in every sector it operates in.

     

    Can filters be created to detect these models?

     

Yes, but it would result in a whack-a-mole effect similar to what we have now; it would be a good best effort, but definitely no panacea. Filter-based ethical principles could be programmatically created to detect the models of any malicious or exploitative actor, or to define areas and topics that are out of bounds. However, we need to ask "Who is in control of the AI code itself?" and "Can we trust the AI systems that are providing the answers not to be biased, or to have compromised integrity at the baseline?". If the baseline were indeed biased or compromised, we would absolutely need to know.
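
As a rough illustration of how such a filter could be wired up programmatically, here is a minimal sketch. Everything in it is an assumption for illustration: it layers an invented local blocklist in front of the OpenAI Python SDK's hosted moderation endpoint, and the blocked topics are made-up examples.

    from openai import OpenAI

    client = OpenAI()  # assumes an OPENAI_API_KEY in the environment

    # Illustrative, invented list of off-limits request patterns.
    OUT_OF_BOUNDS = ("write malware", "bypass authentication", "exploit this contract")

    def is_allowed(prompt: str) -> bool:
        """Two-layer filter: a crude local blocklist, then a hosted moderation check."""
        lowered = prompt.lower()
        if any(topic in lowered for topic in OUT_OF_BOUNDS):
            return False  # local rule: topic declared off-limits
        result = client.moderations.create(input=prompt)
        return not result.results[0].flagged  # hosted classifier's verdict

    if is_allowed("Explain how re-entrancy bugs happen in smart contracts"):
        print("prompt passed the filter")

Anything caught this way can usually be rephrased around, which is exactly the whack-a-mole dynamic described above.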

     

The logical solution would be to protect networks and devices using decentralised and distributed consensus methods, so that the status and trustworthiness of the data being generated is known to be good, true, and trusted in a highly resilient and cryptographically strong manner. That data must be auditable and immune to local tampering or subversion by malicious actors, whether internal or external.
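
One simple building block behind that idea is a hash-chained audit log, in which every record commits to the record before it, so any local tampering breaks the chain for every later verifier. The sketch below illustrates only the general principle; it is not Naoris Protocol's actual mechanism, and a real deployment would distribute verification across many independent nodes.

    import hashlib
    import json

    def record_hash(prev_hash: str, record: dict) -> str:
        """Hash a record together with the previous entry's hash, chaining the log."""
        payload = json.dumps(record, sort_keys=True)
        return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

    def append(log: list, record: dict) -> None:
        """Append a record, committing to the hash of the entry before it."""
        prev = log[-1]["hash"] if log else "0" * 64
        log.append({"record": record, "hash": record_hash(prev, record)})

    def verify(log: list) -> bool:
        """Recompute the whole chain; a tampered record invalidates every later hash."""
        prev = "0" * 64
        for entry in log:
            if entry["hash"] != record_hash(prev, entry["record"]):
                return False
            prev = entry["hash"]
        return True

    log: list = []
    append(log, {"device": "node-1", "status": "healthy"})
    append(log, {"device": "node-2", "status": "healthy"})
    log[0]["record"]["status"] = "compromised"  # simulate local tampering
    print(verify(log))  # False: the subversion is detectable by any auditor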

     

    So where to from here?

     

The way ChatGPT crashed into the market can be compared to Superman's arrival on planet Earth from Krypton. We had no clue of his existence before he arrived; we were not sure how his powers would impact the world as he grew up; and we were not sure how dark forces (Kryptonite) could affect the outcome of his behaviour. It would be presumptuous, if not arrogant, to suggest that anyone really knows how this is all going to play out. The only thing we know for sure is that some aspects of the way the world functions will change irrevocably. It will be an exciting and compelling journey to see how humanity deals with yet another game-changing technology that, in turn, will be overshadowed by many other innovations. We no longer have rear-view mirrors in which the past helps us predict the future; the future is a vector that will chart its own course, and everyone will have a role in ensuring it is a net positive for humanity.


