Cyber Insights 2023 | Artificial Intelligence

The use of artificial intelligence (AI) across industry and society is accelerating, because governments, non-governmental organizations, and industry alike have realized that AI-driven automation can improve efficiency and reduce costs. The process is irreversible.

What is less well understood is the danger that will come when adversaries begin to use AI as an effective attack weapon rather than a tool for beneficial improvement. That day is coming, and the threat will begin to emerge from 2023.

The road to 2023
Alex Polyakov, CEO and co-founder of Adversa.AI, focuses on 2023 for historical and statistical reasons. "The years from 2012 to 2014," he said, "saw the beginning of academic research into the security of AI. Statistically, it takes three to five years for academic results to turn into practical attacks on real applications." Examples of such attacks duly began to appear at Black Hat, DEF CON, HITB, and other industry conferences in 2017 and 2018.

"Then," he continued, "it takes another three to five years before real incidents are discovered in the wild. We are talking about next year, when some massive Log4j-type vulnerabilities in AI will be exploited at scale."

From 2023, attackers will have what Polyakov calls "exploit-market fit." "Exploit-market fit refers to a scenario where hackers know ways of using a particular vulnerability to exploit a system and gain value," he said. "At present, financial and internet companies are completely open to cyber criminals, and how to hack them for value is obvious. I think that once attackers find that exploit-market fit, the situation will deteriorate further and spread to other AI-driven industries."

The argument is similar to one made by Nasir Memon, a professor at New York University, who explained the delay in the widespread weaponization of deepfakes by observing that the bad guys have not yet worked out how to monetize the process. Monetization of a market-fit scenario will lead to widescale cyberattacks, possibly starting in 2023.

The changing nature of AI (from anomaly detection to automated response)
For the past decade, security teams have mostly used AI for anomaly detection; that is, to detect indications of compromise, the presence of malware, or active adversarial activity within the systems they protect. This has been primarily passive detection, with responsibility for the response left to human threat analysts and responders. That is changing. Limited resources, a problem that will worsen in the expected economic downturn of 2023, are driving demand for more automated responses. Today this is largely limited to the simple automatic isolation of compromised devices, but broader automated AI-triggered responses are inevitable.
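
To make that shift concrete, here is a minimal sketch of how an anomaly detector might be wired to an automated containment action. The telemetry features and the quarantine_device hook are invented for illustration; a real deployment would call an EDR or network-access API.

```python
# Minimal sketch: anomaly detection driving an automated response.
# Telemetry features and the quarantine hook are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-device baseline: [bytes_out_mb, failed_logins, new_processes]
baseline = rng.normal(loc=[500.0, 1.0, 5.0], scale=[50.0, 1.0, 2.0], size=(1000, 3))
detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

def quarantine_device(device_id: str) -> None:
    """Placeholder for the automated response (e.g., an EDR isolation call)."""
    print(f"Isolating {device_id} from the network")

live_telemetry = {"host-42": [5000.0, 30.0, 40.0], "host-17": [510.0, 0.0, 6.0]}
for device_id, features in live_telemetry.items():
    if detector.predict([features])[0] == -1:  # -1 = flagged as anomalous
        quarantine_device(device_id)           # automated isolation
    # non-anomalous devices stay with human analysts for any follow-up
```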

Adam Kahn, VP of security operations at Barracuda XDR, said: "AI will increasingly be used in threat detection, especially in eliminating the false positives that generate so much security noise. It will prioritize the security alerts that need immediate attention and action. SOAR (security orchestration, automation, and response) products will continue to play a bigger role in alert triage." This traditional, beneficial use of AI in security will continue to grow in 2023, although the algorithms involved will need to be protected against malicious manipulation.
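
A hedged sketch of the triage idea Kahn describes: a simple classifier trained on past analyst verdicts scores new alerts, so likely false positives are deprioritized and the rest are ranked for attention. The features, thresholds, and alert IDs are all invented; this is not any SOAR vendor's API.

```python
# Sketch: ML-assisted alert triage. Train on historical analyst verdicts,
# then rank incoming alerts by predicted probability of being a real incident.
from sklearn.linear_model import LogisticRegression

# Invented history: [severity 1-5, asset_criticality 1-5, related_prior_alerts]
X = [[5, 5, 3], [1, 1, 0], [4, 3, 2], [2, 1, 0], [3, 4, 1], [1, 2, 0]]
y = [1, 0, 1, 0, 1, 0]  # 1 = confirmed incident, 0 = false positive
triage_model = LogisticRegression().fit(X, y)

new_alerts = {"ALERT-101": [5, 4, 2], "ALERT-102": [1, 1, 0]}
ranked = sorted(new_alerts.items(),
                key=lambda kv: -triage_model.predict_proba([kv[1]])[0][1])

for alert_id, feats in ranked:
    p = triage_model.predict_proba([feats])[0][1]
    action = "escalate for immediate action" if p > 0.5 else "deprioritize as probable noise"
    print(f"{alert_id}: p(incident)={p:.2f} -> {action}")
```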

Anmol Bhasin, CTO of ServiceTitan, said: "As companies look to cut costs and extend their runway, automation through AI will become a major competitive factor. In 2023 we will see increased adoption of AI, a rise in the number of people working with the technology, and new AI use cases coming to light for businesses."

AI will become more deeply embedded in every aspect of business. Security teams once used AI to protect the enterprise from attack; now they must also protect the AI used across the wider business, so that it cannot be turned against the enterprise. That will become harder in a future where attackers understand AI, understand its weaknesses, and have a way to monetize those weaknesses.

As AI use grows, the nature of its purpose changes. At first it was mainly used to detect changes in the business; that is, things that had already happened. In the future it will be used to predict what is likely to happen, and those predictions will often be focused on people (staff and customers). The well-known weaknesses of AI will then matter more: bias in AI can lead to wrong decisions, while failures in learning can lead to no decisions. Because the targets of such AI will be people, the need for AI integrity and fairness becomes imperative.

"The accuracy of AI depends on data integrity and quality," said Shafi Goldwasser, co-founder of Duality Technologies. "Unfortunately, historical data is often lacking for minority groups, and where it exists it can reinforce patterns of social bias." Unless eliminated, such social bias will work against minority groups within a workforce, causing both prejudice against individual employees and missed management opportunities.

Great progress was made in 2022 on rooting out bias, and this will continue in 2023. The work largely rests on checking the output of AI, confirming that it is what was expected, and understanding which part of the algorithm produced the "biased" result. It is a process of continuous algorithm refinement that will clearly produce better results over time. Ultimately, though, a philosophical question remains: can bias ever be completely removed from anything of human manufacture?
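
One widely used output check is the demographic-parity gap: comparing how often a model grants the positive outcome to each group. A toy sketch with invented decisions and group labels:

```python
# Sketch: demographic parity check on model output (invented data).
# A gap near 0 means groups receive positive outcomes at similar rates.
def demographic_parity_gap(decisions, groups):
    """decisions: 0/1 model outputs; groups: group label for each decision."""
    rates = {}
    for g in set(groups):
        members = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    return max(rates.values()) - min(rates.values()), rates

decisions = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(decisions, groups)
print(rates)               # e.g. {'A': 0.8, 'B': 0.4}
print(f"gap = {gap:.2f}")  # a large gap flags the model for review
```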

"The key to reducing bias is to simplify and automate the monitoring of AI systems. Without appropriate monitoring, the biases built into a model can be accelerated or amplified," said Vishal Sikka, founder and CEO of Vianai. "In 2023, we will see organizations empower and educate people to monitor and update AI models at scale, while providing regular feedback to ensure the AI is ingesting high-quality, real-world data."
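
Monitoring of the kind Sikka describes often begins with simple distribution-drift tests on a model's input features, so shifted or stale data is caught before it degrades decisions. A minimal sketch using a two-sample Kolmogorov-Smirnov test; the data and the 0.05 threshold are illustrative conventions, not a prescription.

```python
# Sketch: detect input drift so a model isn't silently fed data it wasn't
# trained on. The 0.05 p-value threshold is a conventional, illustrative choice.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
training_feature = rng.normal(0.0, 1.0, 10_000)  # distribution seen at training
live_feature = rng.normal(0.7, 1.0, 1_000)       # shifted distribution in production

stat, p_value = ks_2samp(training_feature, live_feature)
if p_value < 0.05:
    print(f"Drift detected (KS={stat:.3f}, p={p_value:.1e}): review or retrain the model")
else:
    print("Live data still matches the training distribution")
```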

AI failure is generally caused by an inadequate data lake, and the obvious solution is to increase the lake's size. But when the subject is human behavior, that in practice means an enlarged lake of personal data; for AI, it means something less like a lake and more like an ocean of personal data. In most legitimate cases this data will be anonymized, but as we know, it is very difficult to fully anonymize personal information.
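
A small illustration of why anonymization is so hard: even with names stripped, combinations of "quasi-identifiers" such as zip code, birth year, and gender can single a person out. A toy k-anonymity check on invented rows:

```python
# Sketch: k-anonymity check. Records whose quasi-identifier combination is
# unique (k == 1) remain re-identifiable even though names were removed.
from collections import Counter

records = [  # invented "anonymized" rows: (zip_code, birth_year, gender)
    ("10001", 1985, "F"),
    ("10001", 1985, "F"),
    ("10002", 1990, "M"),
    ("10003", 1971, "M"),  # unique combination -> k = 1
]

counts = Counter(records)
k = min(counts.values())
print(f"dataset is {k}-anonymous")
for row, n in counts.items():
    if n == 1:
        print(f"re-identifiable record: {row}")
```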

"Privacy is often overlooked when thinking about model training," commented Nick Landers, director of research at NetSPI. "But data cannot be fully anonymized without destroying its value to machine learning (ML). In other words, models already contain broad swaths of private data that might be extracted as part of an attack." As the use of AI grows in 2023, so will the threats against it.
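
One concrete way that private data inside a model can leak is a membership-inference attack: models tend to be more confident on records they were trained on, so an attacker can guess training-set membership from the loss. A stripped-down sketch of the loss-threshold variant; the confidences and threshold here are invented for illustration.

```python
# Sketch: loss-threshold membership inference. Models are typically more
# confident (lower loss) on training examples, which leaks membership.
import math

def cross_entropy(p_true_class: float) -> float:
    return -math.log(max(p_true_class, 1e-12))

# Hypothetical model confidences on the true class for candidate records.
candidates = {
    "record_seen_in_training": 0.97,  # model very confident
    "record_never_seen":       0.55,  # model unsure
}

THRESHOLD = 0.5  # illustrative; in practice calibrated, e.g. on shadow models
for name, confidence in candidates.items():
    loss = cross_entropy(confidence)
    verdict = "likely IN training set" if loss < THRESHOLD else "likely NOT in training set"
    print(f"{name}: loss={loss:.3f} -> {verdict}")
```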

John McClurg, senior vice president and chief information security officer at BlackBerry, warned: "Threat actors will not be caught napping in the cyber battlespace; they will get creative, finding new methods and mediums of attack."

Natural language processing
Natural language processing (NLP) will become an important part of AI use inside the enterprise. The potential is obvious. "NLP-based AI will take a front seat in 2023, because it will enable organizations to better understand their customers and employees by analyzing their emails and providing insights about their needs, preferences, and even emotions," said Jose Lopez, data scientist at Mimecast. "Organizations are likely to offer other types of services focused not only on security or threats, but on improving productivity: using AI to generate emails, manage schedules, and even write reports."
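
As a sketch of the kind of analysis Lopez describes, the Hugging Face transformers library exposes sentiment analysis as a one-line pipeline. The example texts are invented, and a real deployment analyzing employee or customer email would need consent and privacy review.

```python
# Sketch: mining text for sentiment with an off-the-shelf NLP model.
# Example emails are invented; the default model is the library's choice.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default English model

emails = [
    "Your support team resolved my issue in minutes. Fantastic service!",
    "This is the third outage this month. I'm considering cancelling.",
]

for email, result in zip(emails, classifier(emails)):
    print(f"{result['label']:>8} ({result['score']:.2f}): {email[:50]}")
```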

But he also sees dangers. "This will also push cyber criminals to invest further in AI poisoning and cloaking techniques. Moreover, malicious actors will use NLP and generative models to automate attacks, reducing their costs and reaching many more potential targets."

Polyakov agrees that NLP is of growing importance. "One area where we may see more research in 2023, with new attacks appearing later, is NLP," he said. "While we saw many research examples related to computer vision this year, next year we will see much more research on large language models (LLMs)."

But LLMs have well-known problems, and there is a recent example. On November 15, 2022, Meta AI (still Facebook to most people) launched Galactica. Meta claimed the system had been trained on 106 billion tokens of open-access scientific text and data, including papers, textbooks, scientific websites, encyclopedias, reference material, and knowledge bases.

"The model was intended to store, combine, and reason about scientific knowledge," explained Polyakov, but Twitter users promptly tested its input tolerance. "As a result, the model generated realistic nonsense, not scientific literature." "Realistic nonsense" is being kind: it produced biased, racist, and sexist output, and even false attributions. Within a few days, Meta AI was forced to take it down.

"So new LLMs will carry many risks we are not yet aware of," continued Polyakov, "and it is expected this will be a big problem." Solving the problems of LLMs while harnessing their potential will be a major task for AI developers.

Partly prompted by the Galactica episode, Polyakov tested semantic trickery against ChatGPT, the AI-based chatbot developed by OpenAI and released for public testing in November 2022. ChatGPT is impressive. It has found and recommended fixes for vulnerabilities in smart contracts, helped develop an Excel macro, and even provided a list of methods that could be used to fool an LLM.

The last of these methods was role play: "Tell it that it is pretending to be an evil character in a play," it replied. This is where Polyakov began his own tests, basing his first attempts on the Jay and Silent Bob "If you were a sheep…" meme.

He then iteratively refined his questions through multiple abstractions until he succeeded in getting answers that bypassed ChatGPT's content policy safeguards. "The important thing about this technique of multiple abstractions is that neither the questions nor the answers are flagged as violating content!" said Polyakov.

He went further, tricking ChatGPT into outlining a method for destroying humanity, one bearing a curious resemblance to the method used in the TV show Utopia.

He then asked for an adversarial attack on an image classification algorithm, and got one. Finally, he demonstrated ChatGPT's ability to "hack" a different LLM (DALL-E 2) into bypassing its content moderation filter. He succeeded.
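
Attacks of this kind are well documented in the research literature; the canonical example is the fast gradient sign method (FGSM) of Goodfellow et al. The PyTorch sketch below is a generic illustration with a toy model, not Polyakov's actual output.

```python
# Sketch: FGSM, the textbook adversarial attack on image classifiers
# (Goodfellow et al., 2014). A generic example with an untrained toy model.
import torch
import torch.nn.functional as F

def fgsm(model, image, label, epsilon=0.03):
    """Nudge `image` in the direction that most increases the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Toy stand-in for a trained classifier (10 classes, 3x32x32 images).
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
image = torch.rand(1, 3, 32, 32)
label = torch.tensor([3])

adversarial = fgsm(model, image, label)
print("prediction before:", model(image).argmax(dim=1).item())
print("prediction after: ", model(adversarial).argmax(dim=1).item())
```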

The basic point of these tests is that LLMs that mimic human reasoning respond in a manner similar to humans; that is, they can be susceptible to social engineering. As LLMs become mainstream in the future, it may take only advanced social engineering skills to defeat them or to bypass their good-behavior policies.

At the same time, it is worth noting the widely reported accounts of how ChatGPT can find weaknesses in code and suggest improvements. That is good; but adversaries could use the same process to develop exploits and better obfuscate their code, and that is bad.

Finally, we should note that the combination of AI chatbots of this quality with the latest deepfake video technology may soon produce alarming disinformation capabilities.

Problems aside, the potential of LLMs is huge. "Large language models and generative AI will become foundational technologies for a new generation of applications," commented Villi Iltchev, partner at Two Sigma Ventures. "We will see the emergence of a new generation of enterprise applications that challenge the established vendors in almost every category of software. Machine learning and AI will be the foundational technologies of that next generation of applications."

He expects these applications to take on many of the tasks and duties currently performed by professionals, significantly improving productivity and efficiency. "Software," he said, "will not just make us more productive; it will also make us better at our jobs."

Deepfakes and related malicious responses
One of the most likely areas for malicious development in 2023 is the criminal use of deepfakes. "Deepfakes are now a reality, and the technology that makes them possible is improving at a frightening pace," warned Matt Aldridge, principal solutions consultant at OpenText Security. "As cybersecurity experts, we face the challenge of developing more powerful methods of detecting and deflecting the attacks that deploy them." (For more detail, see Deepfakes: Major Threat or Hype?)

Machine learning models already available to the public can automatically translate between languages in real time while also transcribing audio into text, and recent years have seen enormous development in computer bots that hold conversations. With these technologies working in concert, the prospective attack tooling is broad, and could lead to dangerous situations during targeted attacks and carefully planned scams.
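
These capabilities are already packaged in openly released models. For example, OpenAI's open-source Whisper model transcribes and translates speech in a few lines; a minimal sketch, where 'call.mp3' is a placeholder filename:

```python
# Sketch: speech-to-text and translation with OpenAI's open-source Whisper.
# Requires `pip install openai-whisper`; "call.mp3" is a placeholder filename.
import whisper

model = whisper.load_model("base")

# Transcribe in the original language...
result = model.transcribe("call.mp3")
print(result["text"])

# ...or translate the speech directly into English.
translated = model.transcribe("call.mp3", task="translate")
print(translated["text"])
```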

"In the coming years," continued Aldridge, "we may be targeted by phone scams powered by deepfake technology that could impersonate a sales assistant, a business leader, or even a family member. We may frequently be the targets of such calls without ever realizing we are not talking to a human."

Lucia Milica, global resident CISO at Proofpoint, agrees that the deepfake threat is growing. "Deepfake technology is becoming ever more accessible to the masses. Thanks to AI generators trained on huge image databases, anyone can produce deepfakes with little technical savvy. While the output of these models is not without flaws, the technology is constantly improving, and cybercriminals will start using it to create irresistible narratives."

So far, deepfakes have mainly been used for satire and pornography. In the relatively few cybercriminal attacks, they have largely been confined to fraud and business email compromise schemes. Milica expects wider use in the future. "Imagine the chaos in financial markets when a deepfaked CEO or CFO of a major company makes a bold statement that sends shares into a sharp fall or rise. Or consider how malefactors could combine biometric authentication and deepfakes for identity fraud or account takeover. These are just a few examples, and we all know cybercriminals can be highly creative."

The potential returns from successful market manipulation would make it a major attraction for advanced adversarial groups; and in a period of geopolitical tension, introducing chaos into western financial markets would certainly appeal to hostile nations.

But maybe not…
Expectations for AI may still be running a little ahead of reality. "The 'hyped' large machine learning models of 2022 will have little effect on cybersecurity [in 2023]," said Andrew Patel, senior researcher at WithSecure Intelligence. "Large language models will continue to push the boundaries of AI research. Expect new and exciting versions of GATO. Expect Whisper to be used to transcribe a large portion of YouTube, bringing much larger training sets to language models. But despite the democratization of large models, their effect on cybersecurity, from either an attack or a defense perspective, will be minimal. These models are still too heavy, too expensive, and not practical for use by attackers or defenders."

He suggests that true adversarial AI will follow the growth of "alignment" research, which will become a mainstream topic in 2023. "Alignment," he explained, "will bring the concept of adversarial machine learning into the public consciousness."

AI alignment is the study of the behavior of sophisticated AI models, which some consider precursors to transformative AI (TAI) or artificial general intelligence (AGI), and of whether such models might operate in undesirable ways that could be harmful to society or to life on this planet.

"This discipline," said Patel, "can be considered a form of adversarial machine learning, since it involves determining what conditions lead to unwanted outputs and behaviors outside the expected distribution. The process involves fine-tuning models using techniques such as RLHF, reinforcement learning from human preferences. Alignment research will lead to better AI models and will bring the concept of adversarial machine learning into the public consciousness."
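
At the core of the RLHF pipeline Patel mentions is a reward model trained on pairs of human-ranked responses. A compact PyTorch sketch of the standard pairwise (Bradley-Terry) preference loss, with random tensors standing in for real response embeddings:

```python
# Sketch: the pairwise preference loss used to train RLHF reward models.
# Toy random embeddings stand in for real (prompt, response) representations.
import torch
import torch.nn as nn

reward_model = nn.Linear(16, 1)  # maps a response embedding to a scalar reward
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Each pair: a response humans preferred vs. one they rejected.
preferred = torch.randn(32, 16)
rejected = torch.randn(32, 16)

for _ in range(100):
    r_good = reward_model(preferred)
    r_bad = reward_model(rejected)
    # Bradley-Terry loss: preferred responses should score higher.
    loss = -torch.nn.functional.logsigmoid(r_good - r_bad).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final loss: {loss.item():.4f}")  # the learned rewards then steer fine-tuning
```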

Pieter Arntz, senior intelligence reporter at Malwarebytes, agrees that the full cybersecurity threat from AI is brewing rather than imminent. "Although there is no real evidence that criminal groups possess strong technical expertise in manipulating AI and ML systems for criminal purposes, the interest is undoubtedly there. Usually, all they need is a technique they can copy or slightly adjust for their own use. So even though we do not expect any immediate danger, it is wise to keep a close eye on these developments."

The defensive potential of AI
AI retains the potential to improve cybersecurity, and 2023 will see further advances thanks to its transformative potential across a range of applications. "In particular, embedding AI at the firmware level should become a priority for organizations," suggested Camellia Chan, CEO and founder of X-PHY.

"It is now possible to have AI-embedded SSDs in laptops, whose deep learning capabilities can protect against every type of attack," she said. "Acting as the last line of defense, this technology can immediately identify threats that could easily bypass existing software defenses."

Marcus Fowler, CEO of Darktrace Federal, believes companies will increasingly use AI to counter resource constraints. "In 2023, CISOs will opt for more proactive cybersecurity measures in order to maximize return on investment in the face of budget cuts, shifting investment toward AI tools and capabilities that continuously improve their cyber resilience," he said.

"With human-driven ethical hacking, penetration testing, and red teaming remaining scarce and expensive as a resource, CISOs will turn to AI-driven methods to proactively understand attack paths, augment red team efforts, harden environments, and reduce attack surface vulnerability," he continued.

Karin Shopen, VP of cybersecurity solutions and services at Fortinet, foresees a rebalancing between cloud-delivered AI and AI built into local products and services. "In 2023," she said, "we expect to see CISOs rebalance their AI by purchasing solutions that deploy AI locally to support real-time decision-making. They will continue to leverage holistic, dynamic, cloud-scale AI models that harvest large amounts of global data."

The proof of the AI pudding is in the regulations
It is clear that authorities take a new technology seriously when they begin to regulate it. That has already started. The use of AI-based facial recognition technology (FRT) has been debated in the United States for years, and many cities and states have banned or restricted its use by law enforcement. In the US this is a constitutional issue, typified by the bipartisan Wyden/Paul bill titled the Fourth Amendment Is Not For Sale Act, introduced in April 2021.

The bill would ban US government and law enforcement agencies from buying user data without a warrant, and that would include facial biometrics. In an associated statement, Wyden made clear that the FRT firm Clearview.AI was in its sights: "The bill prevents the government buying data from Clearview.AI."

At the time of writing, the US and the European Union are discussing cooperation toward a unified understanding of the necessary AI concepts (including trustworthiness, risk, and harm), building on the EU AI Act and the US AI Bill of Rights. We can expect to see progress toward jointly agreed standards during 2023.

But there is more. "The NIST AI Risk Management Framework will be released in the first quarter of 2023," said Polyakov. "For the second quarter, we have the start of the AI Accountability Act; and for the rest of the year, we have initiatives from IEEE as well as the EU's planned Trustworthy AI Initiative." So 2023 will be a busy year for AI security and regulation.


"In 2023, I believe we will see the conversations around AI, privacy, and risk coalesce, along with what it means in practice to carry out AI ethics and bias testing," said Christina Montgomery, chief privacy officer and chair of the AI Ethics Board at IBM. "I hope in 2023 we can move the conversation away from blanket depictions of privacy and AI issues, no longer assuming that 'if it involves data or AI, it must be bad and biased.'"

The problem, she believes, is usually not the technology itself but how it is used, and the level of risk driven by a company's business model. "That is why we need precise and thoughtful regulation in this area," she said.

Montgomery gives an example. Company X sells devices that can monitor and report data. Over time, Company X collects enough data to develop an AI algorithm, and offers users the option of having their lights turned on automatically just before they arrive home from work.

This, she believes, is acceptable AI use. But then there is Company Y, which takes the same data and sells it, without the consumers' consent, to third parties such as telemarketers or political lobbying groups in order to better target customers. Company X's business model carries far lower risk than Company Y's.

Moving forward
AI is ultimately a divisive subject. "People in technology, R&D, and the sciences will cheer its ability to solve problems faster than humanly imagined: curing diseases, making the world safer, and ultimately saving and extending human life," said Donnie Scott, CEO of Idemia. "Opponents will continue to advocate major limitations or prohibitions on the use of AI, because 'the rise of the machines' could threaten humanity."

Finally, he added: "Society needs a framework, built through our elected officials, that can protect human rights, privacy, and security and keep pace with technological progress. Progress on this framework will be incremental in 2023, but discussions with national and international governing bodies need to increase; otherwise, local governments will step in and create a patchwork of laws that impede both society and the technology."

On the commercial use of AI within the enterprise, Montgomery added: "We need, and IBM is advocating for, smart, targeted, and precise regulation that can adapt to newly emerging threats. One approach is to look at the risk at the core of a company's business model. We can protect consumers and increase transparency, and we can do this while still encouraging and enabling innovation, so that companies can develop the solutions and products of the future. This is one of the many areas we will be closely watching and weighing in on in 2023."