
28 Words About Optimization Every AI-Savvy Leader Must Know

Last Updated on July 20, 2023 by Editorial Team

Author(s): Yannique Hecht

Originally published on Towards AI.


Think you can explain these? Put your knowledge to the test!

[This is the 4th part of a series. Make sure you read about Search, Knowledge, and Uncertainty before continuing. The next topics are Machine Learning, Neural Networks, and Language.]

Over two-thirds of artificial intelligence’s value creation comes from improving existing systems and processes, according to McKinsey’s ‘Notes from the AI frontier’ (a highly recommended resource and strategic business perspective on AI value creation).

Although this statistic covers both traditional and advanced techniques and a wide range of AI applications, its major share is rooted in simple optimization problems. Think revamping the logistics of global supply chains or raising the accuracy of financial prediction models.

AI can create $15.4 trillion in value each year.

In Optimization, the objective is to choose the best possible option from a set of predetermined alternatives, often hidden deep in the data. To unlock incremental value, we can leverage methods like local search, linear programming, or constraint satisfaction to create practical, real-world applications.

To get you going in this exciting field, this article briefly defines the main concepts and terms.

Optimization

local search: search algorithms that maintain a single node and search by moving to a neighboring node

state-space landscape: a representation of all the states a problem can take on, together with the value of each state; local search moves through this landscape in search of a state with the desired property, such as a global maximum or minimum

global maximum: the largest overall value of a set, function, etc., over its entire range


global minimum: the smallest overall value of a set, function, etc., over its entire range


objective function: the function to be maximized (e.g., revenues)

cost function: the function to be minimized (e.g., costs); it could look like this:

50x₁ + 80x₂

current state (or configuration): the set of inputs, variables, and constants currently stored in memory


hill-climbing: an optimization technique used to find a “local optimum” solution to a computational problem; variants include steepest-ascent, stochastic, first-choice, random-restart, and local beam search (a minimal sketch of the steepest-ascent variant follows the list of variants below)


steepest-ascent: hill-climbing variant, which chooses the highest-valued neighbor

stochastic: hill-climbing variant, which chooses randomly from higher-valued neighbors

first-choice: hill-climbing variant, which chooses the first higher-valued neighbor

random-restart: hill-climbing variant, which conducts hill-climbing multiple times

local beam search: hill-climbing variant, which chooses the k highest-valued neighbors
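
To make this concrete, here is a minimal Python sketch of the steepest-ascent variant. It assumes the caller supplies a neighbors() function and a value() function for the problem at hand; both names, and the toy objective in the usage example, are illustrative, not a standard API:

def hill_climb(initial_state, neighbors, value):
    """Steepest-ascent hill climbing: repeatedly move to the
    highest-valued neighbor until no neighbor improves on the
    current state (a local, not necessarily global, maximum)."""
    current = initial_state
    while True:
        best = max(neighbors(current), key=value)
        if value(best) <= value(current):
            return current  # no neighbor is better: local maximum
        current = best

# Toy usage: maximize f(x) = -(x - 3)^2 over integer states.
f = lambda x: -(x - 3) ** 2
print(hill_climb(0, lambda x: [x - 1, x + 1], f))  # -> 3

The stopping condition is also the technique’s weakness: the search halts at the first local maximum it reaches, which is exactly what variants like random-restart, and simulated annealing below, try to mitigate.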

simulated annealing: a local search technique that sometimes accepts worse-valued neighbors, with a probability that shrinks as a “temperature” parameter decreases over time, allowing the search to escape local optima
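
Here is a minimal sketch of the idea; the 1/step cooling schedule and the parameter names are illustrative assumptions, not a canonical choice:

import math
import random

def simulated_annealing(initial_state, neighbor, value, t0=10.0, steps=10_000):
    """Always accept a better neighbor; accept a worse one with
    probability exp(delta / T), where the temperature T shrinks
    over time, so bad moves become rarer as the search cools."""
    current = initial_state
    for step in range(1, steps + 1):
        t = t0 / step  # simple cooling schedule (an assumption)
        candidate = neighbor(current)
        delta = value(candidate) - value(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = candidate
    return current

# Toy usage: a bumpy objective with several local maxima.
f = lambda x: math.sin(x) + math.sin(3 * x)
print(simulated_annealing(0.0, lambda x: x + random.uniform(-0.5, 0.5), f))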

linear programming: a mathematical method to determine the best possible outcome from a defined set of options or requirements, represented as linear relationships

simplex: a common linear programming algorithm

interior-point: another common linear programming algorithm
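
In practice, you rarely implement either algorithm yourself; off-the-shelf solvers handle them. As an illustration, here is a sketch using SciPy’s linprog with the cost function shown above and the constraint function shown further below; the demand floor 10x₁ + 12x₂ ≥ 90 is an assumption added purely so the minimum is not trivially zero:

from scipy.optimize import linprog

# Minimize 50*x1 + 80*x2 subject to 5*x1 + 2*x2 <= 100 and the
# assumed demand floor 10*x1 + 12*x2 >= 90. linprog expects all
# inequalities in <= form, so the floor is negated.
result = linprog(
    c=[50, 80],                     # cost function coefficients
    A_ub=[[5, 2], [-10, -12]],      # left-hand sides of <= constraints
    b_ub=[100, -90],                # right-hand sides
    bounds=[(0, None), (0, None)],  # x1 >= 0, x2 >= 0
    method="highs",                 # current default solver; "simplex" and
                                    # "interior-point" were older options
)
print(result.x, result.fun)         # optimal (x1, x2) = (9, 0), minimum cost 450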

constraint satisfaction: the process of finding a solution to a set of constraints that impose conditions that the variables must satisfy; types of constraints include hard, soft, unary, and binary


constraint function: a function specifying the prescribed conditions in a linear or nonlinear programming problem (e.g., limits on time, labor, or input); it could look like this:

5x₁ + 2x₂ ≤ 100

hard constraints: constraints that must be satisfied in a correct solution

soft constraints: constraints that express some notion of which solutions are preferred over others

unary constraint: constraint involving only one variable

{A ≠ Monday}

binary constraint: constraint involving two variables

{A ≠ B}

node consistency: when all the values in a variable’s domain satisfy the variable’s unary constraints

arc consistency: when all the values in a variable’s domain satisfy the variable’s binary constraints; to make A arc-consistent with respect to B, remove elements from A’s domain until every choice for A has a possible choice for B

A: {Mon, Tue, Wed}
B: {Mon, Tue, Wed}
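
The pruning step in this definition is small enough to sketch directly. The revise() helper below is one common formulation; the function name and the dictionary-of-sets representation of domains are illustrative choices, not a fixed API:

def revise(domains, x, y, constraint):
    """Make x arc-consistent with respect to y: drop every value
    in x's domain that no value in y's domain can pair with under
    the binary constraint. Returns True if x's domain changed."""
    removed = {vx for vx in domains[x]
               if not any(constraint(vx, vy) for vy in domains[y])}
    domains[x] -= removed
    return bool(removed)

# Usage with the domains above and the binary constraint A != B:
domains = {"A": {"Mon", "Tue", "Wed"}, "B": {"Mon", "Tue", "Wed"}}
revise(domains, "A", "B", lambda a, b: a != b)  # nothing pruned yet
domains["B"] = {"Mon"}                          # B becomes fixed...
revise(domains, "A", "B", lambda a, b: a != b)  # ...so Mon leaves A's domain
print(domains["A"])                             # {'Tue', 'Wed'}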

backtracking search: a depth-first search algorithm that systematically assigns all possible combinations of values to the variables to check if these assignments constitute a solution
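
A minimal sketch of the idea, assuming a caller-supplied consistent() check over partial assignments (all names here are illustrative):

def backtrack(assignment, variables, domains, consistent):
    """Depth-first search over assignments: pick an unassigned
    variable, try each value in its domain, recurse, and undo
    (backtrack) when a partial assignment cannot be extended."""
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment):
            if backtrack(assignment, variables, domains, consistent):
                return assignment
        del assignment[var]
    return None

# Usage: schedule A and B on different days, as in the examples above.
ok = lambda a: not ("A" in a and "B" in a and a["A"] == a["B"])
print(backtrack({}, ["A", "B"],
                {"A": ["Mon", "Tue", "Wed"], "B": ["Mon", "Tue", "Wed"]}, ok))
# -> {'A': 'Mon', 'B': 'Tue'}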

maintaining arc-consistency: an algorithm for enforcing arc-consistency every time we make a new assignment

least-constraining value heuristic: a method that orders a variable’s candidate values by the number of choices they rule out for neighboring variables (try the least-constraining value first)

Now that you’re able to explain essential Optimization-related terms, you’re hopefully more comfortable exploring this broad field further on your own.

However, you cannot complete your journey to becoming a fully-fledged AI-savvy leader without exploring other related topics, including Search, Knowledge, Uncertainty, Learning, Neural Networks, and Language.

Like What You Read? Eager to Learn More?
Follow me on Medium or LinkedIn.

About the author:
Yannique Hecht works at the intersection of strategy, customer insights, data, and innovation. While his career has spanned the aviation, travel, finance, and technology industries, he is passionate about management. Yannique specializes in developing strategies for commercializing AI & machine learning products.
