Wednesday, October 20, 2021

Artificial intelligence (AI)


 

Mobile Trends

Artificial intelligence has penetrated our mobile world.

We’re getting one step closer to mobile devices morphing into robots and taking over the planet. Obviously, I’m kidding.

While that day has yet to come, we are seeing advancements in mobile AI. You may be familiar with some of these:

  • Alexa
  • Siri
  • Cortana
  • Google Assistant

All of these are examples of AI that may even be installed on your mobile devices right now. In addition to these popular forms of AI, mobile apps are now using software such as voice recognition to encourage hands-free use and ultimately optimize the customer experience.

AI software is used to help developers and marketers learn more about the user.

Businesses are trying to get more revenue by using this information to create relevant advertisements that target specific audiences.

As the hype around AI has accelerated, vendors have been scrambling to promote how their products and services use AI. Often what they refer to as AI is simply one component of AI, such as machine learning. AI requires a foundation of specialized hardware and software for writing and training machine learning algorithms. No one programming language is synonymous with AI, but a few, including Python, R and Java, are popular.

In general, AI systems work by ingesting large amounts of labeled training data, analyzing the data for correlations and patterns, and using these patterns to make predictions about future states. In this way, a chatbot that is fed examples of text chats can learn to produce lifelike exchanges with people, or an image recognition tool can learn to identify and describe objects in images by reviewing millions of examples.
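
To make that loop concrete, here is a minimal, illustrative sketch using scikit-learn; the tiny labeled dataset is hypothetical and far smaller than anything a real system would train on:

```python
# A minimal, illustrative sketch of the ingest-analyze-predict loop with
# scikit-learn; the labeled examples are hypothetical and tiny.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Labeled training data: each text arrives with a known category.
texts = ["win a free prize now", "meeting moved to 3pm",
         "cheap loans click here", "lunch tomorrow?"]
labels = ["spam", "ham", "spam", "ham"]

# The pipeline turns text into word-frequency features and fits a model
# that learns correlations between those features and the labels.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The learned patterns then drive predictions about unseen text.
print(model.predict(["claim your free prize"]))  # likely ['spam']
```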

AI programming focuses on three cognitive skills: learning, reasoning and self-correction.

Learning processes. This aspect of AI programming focuses on acquiring data and creating rules for how to turn the data into actionable information. The rules, which are called algorithms, provide computing devices with step-by-step instructions for how to complete a specific task.

Reasoning processes. This aspect of AI programming focuses on choosing the right algorithm to reach a desired outcome.

Self-correction processes. This aspect of AI programming is designed to continually fine-tune algorithms and ensure they provide the most accurate results possible.
Why is artificial intelligence important?

AI is important because it can give enterprises insights into their operations that they may not have been aware of previously and because, in some cases, AI can perform tasks better than humans. Particularly when it comes to repetitive, detail-oriented tasks like analyzing large numbers of legal documents to ensure relevant fields are filled in properly, AI tools often complete jobs quickly and with relatively few errors.

This has helped fuel an explosion in efficiency and opened the door to entirely new business opportunities for some larger enterprises. Prior to the current wave of AI, it would have been hard to imagine using computer software to connect riders to taxis, but today Uber has become one of the largest companies in the world by doing just that. It utilizes sophisticated machine learning algorithms to predict when people are likely to need rides in certain areas, which helps proactively get drivers on the road before they're needed. As another example, Google has become one of the largest players for a range of online services by using machine learning to understand how people use their services and then improving them. In 2017, the company's CEO, Sundar Pichai, pronounced that Google would operate as an "AI first" company.

Today's largest and most successful enterprises have used AI to improve their operations and gain an advantage over their competitors.
What are the advantages and disadvantages of artificial intelligence?

Artificial neural networks and deep learning artificial intelligence technologies are quickly evolving, primarily because AI processes large amounts of data much faster and makes predictions more accurately than humanly possible.

While the huge volume of data being created on a daily basis would bury a human researcher, AI applications that use machine learning can take that data and quickly turn it into actionable information. As of this writing, the primary disadvantage of using AI is that it is expensive to process the large amounts of data that AI programming requires.

Advantages

    Good at detail-oriented jobs;
    Reduced time for data-heavy tasks;
    Delivers consistent results; and
    AI-powered virtual agents are always available.

Disadvantages

    Expensive;
    Requires deep technical expertise;
    Limited supply of qualified workers to build AI tools;
    Only knows what it's been shown; and
    Lack of ability to generalize from one task to another.

Strong AI vs. weak AI

AI can be categorized as either weak or strong.

    Weak AI, also known as narrow AI, is an AI system that is designed and trained to complete a specific task. Industrial robots and virtual personal assistants, such as Apple's Siri, use weak AI.
    Strong AI, also known as artificial general intelligence (AGI), describes programming that can replicate the cognitive abilities of the human brain. When presented with an unfamiliar task, a strong AI system can use fuzzy logic to apply knowledge from one domain to another and find a solution autonomously. In theory, a strong AI program should be able to pass both a Turing Test and the Chinese room test.

What are the 4 types of artificial intelligence?

Arend Hintze, an assistant professor of integrative biology and computer science and engineering at Michigan State University, explained in a 2016 article that AI can be categorized into four types, beginning with the task-specific intelligent systems in wide use today and progressing to sentient systems, which do not yet exist. The categories are as follows:

    Type 1: Reactive machines. These AI systems have no memory and are task specific. An example is Deep Blue, the IBM chess program that beat Garry Kasparov in the 1990s. Deep Blue can identify pieces on the chessboard and make predictions, but because it has no memory, it cannot use past experiences to inform future ones.
    Type 2: Limited memory. These AI systems have memory, so they can use past experiences to inform future decisions. Some of the decision-making functions in self-driving cars are designed this way.
    Type 3: Theory of mind. Theory of mind is a psychology term. When applied to AI, it means that the system would have the social intelligence to understand emotions. This type of AI will be able to infer human intentions and predict behavior, a necessary skill for AI systems to become integral members of human teams.
    Type 4: Self-awareness. In this category, AI systems have a sense of self, which gives them consciousness. Machines with self-awareness understand their own current state. This type of AI does not yet exist.

What are examples of AI technology and how is it used today?

AI is incorporated into a variety of different types of technology. Here are six examples:

    Automation. When paired with AI technologies, automation tools can expand the volume and types of tasks performed. An example is robotic process automation (RPA), a type of software that automates repetitive, rules-based data processing tasks traditionally done by humans. When combined with machine learning and emerging AI tools, RPA can automate bigger portions of enterprise jobs, enabling RPA's tactical bots to pass along intelligence from AI and respond to process changes.
    Machine learning. This is the science of getting a computer to act without being explicitly programmed. Deep learning is a subset of machine learning that, in very simple terms, can be thought of as the automation of predictive analytics. There are three types of machine learning algorithms (a short sketch contrasting them follows this list):
        Supervised learning. Data sets are labeled so that patterns can be detected and used to label new data sets.
        Unsupervised learning. Data sets aren't labeled and are sorted according to similarities or differences.
        Reinforcement learning. Data sets aren't labeled but, after performing an action or several actions, the AI system is given feedback.
    Machine vision. This technology gives a machine the ability to see. Machine vision captures and analyzes visual information using a camera, analog-to-digital conversion and digital signal processing. It is often compared to human eyesight, but machine vision isn't bound by biology and can be programmed to see through walls, for example. It is used in a range of applications from signature identification to medical image analysis. Computer vision, which is focused on machine-based image processing, is often conflated with machine vision.
    Natural language processing (NLP). This is the processing of human language by a computer program. One of the older and best-known examples of NLP is spam detection, which looks at the subject line and text of an email and decides if it's junk. Current approaches to NLP are based on machine learning. NLP tasks include text translation, sentiment analysis and speech recognition.
    Robotics. This field of engineering focuses on the design and manufacturing of robots. Robots are often used to perform tasks that are difficult for humans to perform or perform consistently. For example, robots are used in assembly lines for car production or by NASA to move large objects in space. Researchers are also using machine learning to build robots that can interact in social settings.
    Self-driving cars. Autonomous vehicles use a combination of computer vision, image recognition and deep learning to build automated skill at piloting a vehicle while staying in a given lane and avoiding unexpected obstructions, such as pedestrians.
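
As promised above, here is a short sketch contrasting the first two learning styles with scikit-learn; the toy data is invented, and reinforcement learning is omitted because it requires an interactive environment rather than a fixed dataset:

```python
# A sketch contrasting supervised and unsupervised learning on toy data
# (scikit-learn; reinforcement learning needs an environment, so it is omitted).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

X = np.array([[1, 1], [1, 2], [8, 8], [9, 8]])

# Supervised: labels are given, and the model learns to reproduce them.
y = ["small", "small", "large", "large"]
clf = DecisionTreeClassifier().fit(X, y)
print(clf.predict([[2, 1]]))  # -> ['small']

# Unsupervised: no labels; points are grouped purely by similarity.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_)  # e.g. [0 0 1 1]
```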

What are the applications of AI?

Artificial intelligence has made its way into a wide variety of markets. Here are nine examples.

AI in healthcare. The biggest bets are on improving patient outcomes and reducing costs. Companies are applying machine learning to make better and faster diagnoses than humans. One of the best-known healthcare technologies is IBM Watson. It understands natural language and can respond to questions asked of it. The system mines patient data and other available data sources to form a hypothesis, which it then presents with a confidence scoring schema. Other AI applications include using online virtual health assistants and chatbots to help patients and healthcare customers find medical information, schedule appointments, understand the billing process and complete other administrative processes. An array of AI technologies is also being used to predict, fight and understand pandemics such as COVID-19.

AI in business. Machine learning algorithms are being integrated into analytics and customer relationship management (CRM) platforms to uncover information on how to better serve customers. Chatbots have been incorporated into websites to provide immediate service to customers. Automation of job positions has also become a talking point among academics and IT analysts.

AI in education. AI can automate grading, giving educators more time. It can assess students and adapt to their needs, helping them work at their own pace. AI tutors can provide additional support to students, ensuring they stay on track. And it could change where and how students learn, perhaps even replacing some teachers.

AI in finance. AI in personal finance applications, such as Intuit Mint or TurboTax, is disrupting financial institutions. Applications such as these collect personal data and provide financial advice. Other programs, such as IBM Watson, have been applied to the process of buying a home. Today, artificial intelligence software performs much of the trading on Wall Street.
AI in law. The discovery process -- sifting through documents -- in law is often overwhelming for humans. Using AI to help automate the legal industry's labor-intensive processes is saving time and improving client service. Law firms are using machine learning to describe data and predict outcomes, computer vision to classify and extract information from documents and natural language processing to interpret requests for information.

AI in manufacturing. Manufacturing has been at the forefront of incorporating robots into the workflow. For example, industrial robots that were at one time programmed to perform single tasks and separated from human workers increasingly function as cobots: smaller, multitasking robots that collaborate with humans and take on responsibility for more parts of the job in warehouses, factory floors and other workspaces.

AI in banking. Banks are successfully employing chatbots to make their customers aware of services and offerings and to handle transactions that don't require human intervention. AI virtual assistants are being used to improve and cut the costs of compliance with banking regulations. Banking organizations are also using AI to improve their decision-making for loans, and to set credit limits and identify investment opportunities.

AI in transportation. In addition to AI's fundamental role in operating autonomous vehicles, AI technologies are used in transportation to manage traffic, predict flight delays, and make ocean shipping safer and more efficient.

Security. AI and machine learning are at the top of the buzzword list security vendors use today to differentiate their offerings. Those terms also represent truly viable technologies. Organizations use machine learning in security information and event management (SIEM) software and related areas to detect anomalies and identify suspicious activities that indicate threats. By analyzing data and using logic to identify similarities to known malicious code, AI can provide alerts to new and emerging attacks much sooner than human employees and previous technology iterations. The maturing technology is playing a big role in helping organizations fight off cyber attacks.
Augmented intelligence vs. artificial intelligence

Some industry experts believe the term artificial intelligence is too closely linked to popular culture, and this has caused the general public to have improbable expectations about how AI will change the workplace and life in general.

    Augmented intelligence. Some researchers and marketers hope the label augmented intelligence, which has a more neutral connotation, will help people understand that most implementations of AI will be weak and simply improve products and services. Examples include automatically surfacing important information in business intelligence reports or highlighting important information in legal filings.
    Artificial intelligence. True AI, or artificial general intelligence, is closely associated with the concept of the technological singularity -- a future ruled by an artificial superintelligence that far surpasses the human brain's ability to understand it or how it is shaping our reality. This remains within the realm of science fiction, though some developers are working on the problem. Many believe that technologies such as quantum computing could play an important role in making AGI a reality and that we should reserve the use of the term AI for this kind of general intelligence.

Ethical use of artificial intelligence

While AI tools present a range of new functionality for businesses, the use of artificial intelligence also raises ethical questions because, for better or worse, an AI system will reinforce what it has already learned.

This can be problematic because machine learning algorithms, which underpin many of the most advanced AI tools, are only as smart as the data they are given in training. Because a human being selects what data is used to train an AI program, the potential for machine learning bias is inherent and must be monitored closely.

Anyone looking to use machine learning as part of real-world, in-production systems needs to factor ethics into their AI training processes and strive to avoid bias. This is especially true when using AI algorithms that are inherently unexplainable in deep learning and generative adversarial network (GAN) applications.

Explainability is a potential stumbling block to using AI in industries that operate under strict regulatory compliance requirements. For example, financial institutions in the United States operate under regulations that require them to explain their credit-issuing decisions. When a decision to refuse credit is made by AI programming, however, it can be difficult to explain how the decision was arrived at because the AI tools used to make such decisions operate by teasing out subtle correlations between thousands of variables. When the decision-making process cannot be explained, the program may be referred to as black box AI.
Despite potential risks, there are currently few regulations governing the use of AI tools, and where laws do exist, they typically pertain to AI indirectly. For example, as previously mentioned, United States Fair Lending regulations require financial institutions to explain credit decisions to potential customers. This limits the extent to which lenders can use deep learning algorithms, which by their nature are opaque and lack explainability.

The European Union's General Data Protection Regulation (GDPR) puts strict limits on how enterprises can use consumer data, which impedes the training and functionality of many consumer-facing AI applications.

In October 2016, the National Science and Technology Council issued a report examining the potential role governmental regulation might play in AI development, but it did not recommend specific legislation be considered.

Crafting laws to regulate AI will not be easy, in part because AI comprises a variety of technologies that companies use for different ends, and partly because regulations can come at the cost of AI progress and development. The rapid evolution of AI technologies is another obstacle to forming meaningful regulation of AI. Technology breakthroughs and novel applications can make existing laws instantly obsolete. For example, existing laws regulating the privacy of conversations and recorded conversations do not cover the challenge posed by voice assistants like Amazon's Alexa and Apple's Siri that gather but do not distribute conversation -- except to the companies' technology teams which use it to improve machine learning algorithms. And, of course, the laws that governments do manage to craft to regulate AI don't stop criminals from using the technology with malicious intent.
Cognitive computing and AI

The terms AI and cognitive computing are sometimes used interchangeably, but, generally speaking, the label AI is used in reference to machines that replace human intelligence by simulating how we sense, learn, process and react to information in the environment.

The label cognitive computing is used in reference to products and services that mimic and augment human thought processes.
What is the history of AI?

The concept of inanimate objects endowed with intelligence has been around since ancient times. The Greek god Hephaestus was depicted in myths as forging robot-like servants out of gold. Engineers in ancient Egypt built statues of gods animated by priests. Throughout the centuries, thinkers from Aristotle to the 13th century Spanish theologian Ramon Llull to René Descartes and Thomas Bayes used the tools and logic of their times to describe human thought processes as symbols, laying the foundation for AI concepts such as general knowledge representation.
The late 19th and first half of the 20th centuries brought forth the foundational work that would give rise to the modern computer. In 1836, Cambridge University mathematician Charles Babbage and Augusta Ada Byron, Countess of Lovelace, invented the first design for a programmable machine.

1940s. Princeton mathematician John Von Neumann conceived the architecture for the stored-program computer -- the idea that a computer's program and the data it processes can be kept in the computer's memory. And Warren McCulloch and Walter Pitts laid the foundation for neural networks.

1950s. With the advent of modern computers, scientists could test their ideas about machine intelligence. One method for determining whether a computer has intelligence was devised by the British mathematician and World War II code-breaker Alan Turing. The Turing Test focused on a computer's ability to fool interrogators into believing its responses to their questions were made by a human being.

1956. The modern field of artificial intelligence is widely cited as starting this year during a summer conference at Dartmouth College. The conference was attended by 10 luminaries in the field, including AI pioneers Marvin Minsky, Oliver Selfridge and John McCarthy, who is credited with coining the term artificial intelligence. Also in attendance were Allen Newell, a computer scientist, and Herbert A. Simon, an economist, political scientist and cognitive psychologist, who presented their groundbreaking Logic Theorist, a computer program capable of proving certain mathematical theorems and referred to as the first AI program.

1950s and 1960s. In the wake of the Dartmouth College conference, leaders in the fledgling field of AI predicted that a man-made intelligence equivalent to the human brain was around the corner, attracting major government and industry support. Indeed, nearly 20 years of well-funded basic research generated significant advances in AI: For example, in the late 1950s, Newell and Simon published the General Problem Solver (GPS) algorithm, which fell short of solving complex problems but laid the foundations for developing more sophisticated cognitive architectures; McCarthy developed Lisp, a language for AI programming that is still used today. In the mid-1960s MIT Professor Joseph Weizenbaum developed ELIZA, an early natural language processing program that laid the foundation for today's chatbots.

1970s and 1980s. But the achievement of artificial general intelligence proved elusive, not imminent, hampered by limitations in computer processing and memory and by the complexity of the problem. Government and corporations backed away from their support of AI research, leading to a fallow period lasting from 1974 to 1980 and known as the first "AI Winter." In the 1980s, research on deep learning techniques and industry's adoption of Edward Feigenbaum's expert systems sparked a new wave of AI enthusiasm, only to be followed by another collapse of government funding and industry support. The second AI winter lasted until the mid-1990s.

1990s through today. Increases in computational power and an explosion of data sparked an AI renaissance in the late 1990s that has continued to present times. The latest focus on AI has given rise to breakthroughs in natural language processing, computer vision, robotics, machine learning, deep learning and more. Moreover, AI is becoming ever more tangible, powering cars, diagnosing disease and cementing its role in popular culture. In 1997, IBM's Deep Blue defeated Russian chess grandmaster Garry Kasparov, becoming the first computer program to beat a world chess champion. Fourteen years later, IBM's Watson captivated the public when it defeated two former champions on the game show Jeopardy!. More recently, the historic defeat of 18-time World Go champion Lee Sedol by Google DeepMind's AlphaGo stunned the Go community and marked a major milestone in the development of intelligent machines.
AI as a service

Because hardware, software and staffing costs for AI can be expensive, many vendors are including AI components in their standard offerings or providing access to artificial intelligence as a service (AIaaS) platforms. AIaaS allows individuals and companies to experiment with AI for various business purposes and sample multiple platforms before making a commitment.
AI is shaping the future of IT. It has not only transformed traditional methods of computing but is also penetrating many industries, significantly transforming them. As the world becomes more digitized and every industry becomes smarter, IT companies have to keep pace with exploding process complexity and accelerating innovation.
The IT industry: AI at the forefront

The IT industry is faced with a tricky balancing act: driving innovative initiatives while grappling with the side effects of traditional infrastructures. As IT infrastructures become more complex and clients more sophisticated, IT is forced to look for the most effective solutions to enhance IT operations management and accelerate problem resolution in complex modern IT environments. AI, a tremendous breakthrough, has found great use in this diverse, dynamic, and difficult-to-manage IT landscape.
AI technologies for IT

Artificial Intelligence (AI) is a branch of computer science that creates systems able to perform human-like tasks, such as speech and text recognition, content learning, and problem solving. Using AI-powered technologies, computers can accomplish specific tasks by analyzing huge amounts of data and recognizing recurrent patterns in them.
AI: Technology Segments

Being an umbrella term, AI can be divided into different technology segments, such as machine learning, deep learning, natural language processing, image processing, and speech recognition. However, a central role in the IT industry belongs to machine learning and deep learning.
Machine Learning

The essence of intelligence is learning. Machine learning (ML) is a subset of AI that focuses on computer programs able to parse data using specific algorithms. Such a program modifies itself without human intervention, producing the desired output based on the data it has analyzed. In essence, using ML techniques, a machine is trained to analyze huge amounts of data and then learns to perform specific tasks.
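
One way to picture a program that "modifies itself" as data arrives is incremental (online) learning. A minimal sketch, assuming a recent scikit-learn; the batches are hypothetical:

```python
# A minimal sketch of a model that updates itself as new data arrives,
# using scikit-learn's incremental (online) learning API. Data is invented.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")   # requires scikit-learn >= 1.1
classes = np.array([0, 1])               # declare all labels up front

batches = [
    (np.array([[0.1], [0.9]]), np.array([0, 1])),
    (np.array([[0.2], [0.8]]), np.array([0, 1])),
]
for X_batch, y_batch in batches:
    # Each call nudges the model's parameters: no human re-coding needed.
    model.partial_fit(X_batch, y_batch, classes=classes)

print(model.predict(np.array([[0.85]])))  # typically -> [1]
```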
Deep Learning

Deep Learning (DL) is a subset of ML whose algorithms and techniques resemble those of machine learning but whose capabilities differ. In DL, a computer system is trained to perform classification tasks directly from sounds, text, or images, using large amounts of labeled data together with neural network architectures.
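
As an illustration, a minimal deep-learning classifier that learns directly from labeled images might look like this in Keras, using its built-in MNIST digits dataset (one training epoch only, for brevity):

```python
# A minimal sketch of deep-learning classification directly from labeled
# images, using Keras and its built-in MNIST digits dataset.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

# A small neural network: raw pixel values in, digit probabilities out.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=1)  # learn from the labeled images
model.evaluate(x_test, y_test)         # check accuracy on unseen images
```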
Natural Language Processing

Natural Language Processing (NLP) allows AI to understand and manipulate natural language the way humans do. It enables computers to read text and interpret spoken words with human-like ease and fluidity, despite the inherent complexity of language. NLP relies on two basic concepts: Natural Language Understanding and Natural Language Generation. These two engines power chatbots and intelligent virtual assistants that communicate with users. Moreover, sentiment analysis driven by NLP has proved to be a useful tool in IT.
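
For instance, a small sentiment-analysis sketch using NLTK's VADER analyzer; the sample messages are hypothetical:

```python
# A small sentiment-analysis illustration with NLTK's VADER analyzer;
# the sample messages are hypothetical.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

for message in ["The new release is fantastic!",
                "The outage was a terrible experience."]:
    compound = sia.polarity_scores(message)["compound"]
    print(f"{message} -> {compound:+.2f}")  # > 0 positive, < 0 negative
```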
Computer Vision

Computer vision allows AI to derive meaningful insights from digital images, videos and other visual content. Based on the extracted information, an AI system can take actions or make recommendations. If AI enables computers to think, computer vision enables them to see, observe and understand.
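
Here is a sketch of pulling a label out of an image with a pretrained torchvision model; it assumes torchvision 0.13 or newer (for the weights API), and "photo.jpg" is a placeholder filename:

```python
# A sketch of extracting a label from an image with a pretrained model.
# Assumes torchvision >= 0.13 (weights API); "photo.jpg" is a placeholder.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()  # resizing/normalization the model expects

img = preprocess(Image.open("photo.jpg")).unsqueeze(0)  # add a batch dimension
with torch.no_grad():
    probs = model(img).softmax(dim=1)

top = probs.argmax(dim=1).item()
print(weights.meta["categories"][top])  # human-readable class name
```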
AI applications in IT

In the IT industry, AI-driven applications have found use in three major areas: Quality Assurance, Service Management, and Process Automation.

AI for Quality Assurance

Each time a development team introduces new code, it has to be tested before it enters the market. Regression testing cycles take a lot of effort and time when done manually by QA experts. With AI's ability to detect repetitive patterns, this process can run more easily and faster. Using AI for data analysis allows QA departments to eliminate human error, reduce test run time, and easily identify possible defects. As a result, the QA team is not overloaded with large amounts of data to handle.
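
As a toy illustration of exploiting repetitive patterns, a regression suite could be ordered by historical failure counts so likely regressions surface first; the test names and counts are hypothetical, and a real system would mine these patterns from richer build and defect history:

```python
# A toy illustration of pattern-based test prioritization: run the tests
# that have failed most often first. Names and counts are hypothetical.
failure_history = {
    "test_login": 12,
    "test_checkout": 7,
    "test_profile": 0,
    "test_search": 3,
}

# Order the regression suite so likely regressions surface earliest.
prioritized = sorted(failure_history, key=failure_history.get, reverse=True)
print(prioritized)  # ['test_login', 'test_checkout', 'test_search', 'test_profile']
```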
Application Testing

An AI-based system builds test suites by processing behavioral patterns according to location, device, and demographics. This allows QA departments to facilitate testing processes and enhance the effectiveness of an application.
Social Media Analysis

AI systems are able to process and analyze huge amounts of data gathered from social media. Based on these data, the system can predict market trends and customer behavior, thereby providing a company with a competitive advantage.

Defect Analysis

AI systems monitor and analyze data, then compare them to prescribed parameters in order to detect errors or areas that require special attention. If the system detects a problem or an error, it generates a warning. Additionally, the AI system can perform a deep analysis of the errors that occur, identifying the areas most prone to defects and suggesting possible solutions for further optimization.
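
A toy version of that monitor-compare-warn loop might look like the following; the metric names and prescribed limits are hypothetical:

```python
# A toy version of the monitor-compare-warn loop described above;
# metric names and prescribed limits are hypothetical.
PRESCRIBED_LIMITS = {"error_rate": 0.02, "response_ms": 500}

def check_metrics(observed: dict) -> list[str]:
    """Compare observed metrics to prescribed parameters; collect warnings."""
    warnings = []
    for name, limit in PRESCRIBED_LIMITS.items():
        value = observed.get(name)
        if value is not None and value > limit:
            warnings.append(f"WARNING: {name}={value} exceeds limit {limit}")
    return warnings

print(check_metrics({"error_rate": 0.05, "response_ms": 410}))
```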
Efficiency Analysis

By analyzing and summarizing relevant information from a large range of sources, an AI system provides QA engineers with valuable information, giving them a complete view of the alterations they must carry out. Using this information, QA engineers can make better-informed decisions.
AI for Service Management

AI technology is also widely used in service management. Leveraging AI for service automation allows companies to utilize their resources more efficiently, making service delivery faster, cheaper, and more effective.
Self-solving service desk

Today, AI with its machine learning capabilities offers IT companies a self-solving service desk, capable of analyzing all of a company's input data and, as a result, providing users with proper suggestions and possible solutions. Applying AI, companies are able to track user behavior, make suggestions, and consequently provide self-help options that make service management more effective. In this case, AI ultimately gives users a better experience through improved self-service.

The ML and DL capabilities of AI allow the system to analyze a request submitted to a service desk. The AI system finds similar requests, compares newly submitted tickets with previously resolved ones, and then, based on past experience, instantly determines which solution to apply.
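
A minimal sketch of that matching step, using TF-IDF text similarity from scikit-learn; all of the tickets are invented:

```python
# A minimal sketch of matching a new ticket to previously resolved ones
# by text similarity, using scikit-learn. All tickets are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

resolved = [
    "VPN disconnects every hour",        # past fix: update VPN client
    "cannot print to office printer",    # past fix: reinstall print driver
    "email inbox not syncing on phone",  # past fix: re-add mail account
]
new_ticket = ["printer not working from my laptop"]

vec = TfidfVectorizer().fit(resolved + new_ticket)
scores = cosine_similarity(vec.transform(new_ticket), vec.transform(resolved))

best = scores.argmax()
print(f"closest past ticket: {resolved[best]!r} (similarity {scores[0, best]:.2f})")
```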

Being a powerful business tool, AI assists IT teams in operational processes, helping them act more strategically. By tracking and analyzing user behavior, the AI system is able to make suggestions for process optimization and even help develop an effective business strategy.
AI for Process Automation
Humans and manual processes can no longer keep pace with network innovation, evolution, complexity, and change. The next evolution of automation is AI. Various business processes will become smarter, more aware, and more contextual. AI-powered automation will allow IT companies to easily automate many operational processes, reducing expenses and minimizing manual work. IT process automation can be used to streamline various IT operations in a vast number of situations, replacing repetitive manual tasks and business processes with automated solutions.
AI-driven computer engineering
Automated Network Management

AI also automates the processes of running and managing company networks. With its ML capabilities, AI is able to spot problems as they occur and take the measures needed to bring the network back to a stable operating state.
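
As an illustration, a simple statistical check on a stream of latency measurements could stand in for that spot-and-remediate loop; the metric values and the remediation function are hypothetical:

```python
# A toy version of the spot-and-remediate loop: flag a latency spike with
# a simple statistical check and trigger a (hypothetical) remediation step.
import statistics

def restart_link(name: str) -> None:
    print(f"remediation: restarting {name}")  # stand-in for a real action

latencies_ms = [20, 22, 21, 19, 23, 20, 180]  # last sample is anomalous
baseline = latencies_ms[:-1]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

current = latencies_ms[-1]
if abs(current - mean) > 3 * stdev:  # crude z-score style threshold
    restart_link("uplink-1")
```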

AIOps: AI for IT Operations
The term “AIOps” was first coined by Gartner and refers to the use of AI to manage information technology through a multi-layer platform. Specifically, AIOps uses big data, analytics, and machine learning capabilities to automate data processing and decision making. AIOps platforms enable comprehensive insight into the past and present states of IT systems, based on analysis of real-time and historical data.

Gartner defines AIOps as "platforms and software systems that combine big data and AI or machine learning functionality to enhance and partially replace a broad range of IT operations processes and tasks, including availability and performance monitoring, event correlation and analysis, IT service management, and automation." According to Gartner, use of AIOps and digital experience tools to monitor applications and infrastructure will rise from 5% in 2018 to 30% in 2023.

Based on AI technology, AIOps simplifies IT operations management and accelerates problem resolution in complex IT infrastructures.

“IT operations is challenged by the rapid growth in data volumes generated by IT infrastructure and applications that must be captured, analyzed and acted on,” says Padraig Byrne, Senior Director Analyst at Gartner. “Coupled with the reality that IT operations teams often work in disconnected silos, this makes it challenging to ensure that the most urgent incident at any given time is being addressed.”

Continuously increasing data volumes from primary collection systems, a constant rise in the number of information sources, and ongoing system modifications complicate the work of IT companies. AIOps is a great solution for taming this immense complexity and quantity of data.



To get the most out of an AIOps platform, be careful to choose one that can meet your goals. The main features a platform should have are:

    Accumulated data management
    Stream data management
    Log ingestion
    Ingestion of data packets
    Ingestion of metrics
    Ingestion of documents
    Automated pattern discovery and prediction
    Anomaly detection
    Identification of the true source of problems

Having these capabilities will help IT companies solve critical, unpredictable, and high-value issues instead of getting bogged down by an overwhelming amount of mostly irrelevant IT data.
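
To illustrate the anomaly-detection capability from the list above, here is a compact sketch using scikit-learn's IsolationForest; the operational metric samples are synthetic:

```python
# A compact sketch of anomaly detection on operational metrics with
# scikit-learn's IsolationForest. The metric samples are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

# Rows of [cpu_percent, requests_per_sec] drawn from "normal" operation.
rng = np.random.RandomState(0)
normal = rng.normal(loc=[40, 200], scale=[5, 20], size=(500, 2))

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score new observations: -1 flags an anomaly worth investigating.
samples = np.array([[42, 210], [95, 15]])
print(detector.predict(samples))  # e.g. [ 1 -1]
```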

 
