midterm review flashcards

  1. How is AI defined?
    • There are two dimensions:
    • thought vs action and human vs ideal.
    • human, thought: the automation of activities that we associate with thinking (decision making, problem solving,...) (make computers "think" how humans think)
    • human, action: the art of creating machines that perform functions that require intelligence when performed by people (Turing test)
    • ideal, thought: the study of the computations that make it possible to perceive, reason, and act ("right" thinking rather than human thinking)
    • ideal, action: the branch of computer science that is concerned with the automation of intelligent behavior (act based on scientific information rather than based on human actions)
  2. What is the Turing test?
    • The Turing test was developed by Alan Turing as a test to determine when AI has been achieved.
    • The test consists of a machine, an interrogator, and another human. The interrogator conducts conversations with the computer and with the other human. The test is passed if the interrogator judges the AI to be human.
    • A variant, the Ultimate Turing Test, involves a VR environment (incorporates visual interpretation of behavior).
  3. Briefly describe the history of AI.
    • The neural network model was proposed by McCulloch and Pitts in 1943. Individual, mathematically modeled "neurons", could be combined into networks to simulate logical computation.
    • Minsky and Edmonds built the first neural network computer (SNARC, 1951).
    • Arthur Samuel wrote programs that play checkers and learn from experience (1952).
    • Birth of AI - convention at Dartmouth (1956).
    • Logic Theorist and General Problem Solver (GPS) developed by Allen Newell and Herbert Simon in 1956.
    • Perceptrons developed by Frank Rosenblatt in 1962.
    • 1980s - large companies start using expert systems.
    • 1986 - present -- return of neural networks due to faster, parallel computers AND new learning algorithms (involving back-propagation)
    • 1987 - present -- use of scientific methods (mathematical models, etc.); AI tackles more complex problems and works with more uncertainty (speech recognition, Bayesian networks)
    • 1995 - present -- emergence of intelligent agents - agents embedded in real environments with continuous sensory inputs (robotics)
    • 2001 - present -- availability of very large data sets (data mining, deep learning, deep neural network)
  4. A neural network usually has:
    • 1. A large number of simple neuron-like processing elements.
    • 2. A large number of weighted connections (connectomes) between the elements.
    • 3. The weights on the connections encode the knowledge of a network.
    • 4. (later development) Learning is the process of automatically adjusting the weights.
    • 5. Could be highly parallel, distributed control.
  5. What are knowledge based systems?
    Knowledge based systems were developed in the 1970s. They used knowledge (extracted from experts or modeled from theoretical information) to draw conclusions when presented with conditions.
  6. What are two examples of knowledge based systems?
    • DENDRAL - determined a molecule's structure based on the elemental formula of the molecule and the mass spectrum.
    • MYCIN - diagnosed infectious disease given information about that case. Also determined a CF - certainty factor - to indicate the level of certainty of the diagnosis.
  7. What is GPS?
    • GPS (General Problem Solver) was developed by Allen Newell and Herbert Simon in 1956. It was an early AI program designed to imitate the problem solving protocol of human beings.
    • It used MEA (Means Ends Analysis) to accomplish this.
  8. Who is Marvin Minsky?
    • *built the first neural network computer (SNARC) with Dean Edmonds.
    • *attended 2 month convention at Dartmouth where AI was born
    • *co-proved that 2 layer perceptrons cannot learn to classify relatively simple input sets such as XOR - because they are not linearly separable
  9. Who is Herbert Simon?
    • *attended 2 month convention at Dartmouth where AI was born
    • *co-developed GPS and LT with Allen Newell
  10. Who is Warren McCulloch?
    *co-proposed the artificial neural network model with Pitts in 1943
  11. Who is Arthur Samuel?
    Arthur Samuel wrote a series of programs that play checkers and learn the game from experience.
  12. Who is John McCarthy?
    • *organized 2 month convention at Dartmouth where AI was born
    • *defined LISP - dominant AI language for 30 years
    • *invented time sharing
    • * described a program called Advice Taker, in which knowledge (heuristics and procedures) would be expressed as statements in the language itself rather than hard-coded into the program, so its behavior could be changed without reprogramming
  13. What are perceptrons?
    A simple perceptron models a single neuron by taking a weighted sum of its inputs and outputting 1 if the sum is greater than some adjustable threshold value T, 0 otherwise. Perceptrons can be used for pattern classification. Two-layer perceptrons will learn to classify any linearly separable set of inputs.
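    A minimal sketch of a simple perceptron plus the classic perceptron learning rule. The data set (logical AND), learning rate, and epoch count are illustrative assumptions, not from the card.

```python
def perceptron_output(weights, threshold, inputs):
    """Output 1 if the weighted sum of the inputs exceeds the threshold T, else 0."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > threshold else 0

def train_perceptron(samples, lr=0.1, epochs=20):
    """Adjust weights and threshold from labeled samples; converges only for
    linearly separable data such as logical AND (not XOR)."""
    weights = [0.0] * len(samples[0][0])
    threshold = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - perceptron_output(weights, threshold, inputs)
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            threshold -= lr * error   # moving the threshold acts like a bias term
    return weights, threshold

# Logical AND is linearly separable, so the perceptron can learn it.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, t = train_perceptron(and_data)
print([perceptron_output(w, t, x) for x, _ in and_data])   # expected [0, 0, 0, 1]
```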
  14. Who is Claude Shannon?
    • *attended 2 month convention at Dartmouth where AI was born
    • *wrote about chess-playing programs that use minimax search to cope with the complexity of the game tree
  15. What is MYCIN?
    • MYCIN is a program written to diagnose infectious diseases. It took about 20 person-years to create, and used production rules extracted from experts of the format:
    • IF conditions THEN conclusion CF
    • where CF is the certainty with which the diagnosis can be made.
  16. What is back-propagation?
    Back-propagation is the algorithm used to train multi-layer neural networks: the error observed at the output (the difference between the desired and actual outcome) is propagated backward through the network, and the weights of the connections that led to that output are adjusted to reduce the error. (The analogous idea in game learning is backing up the value of a terminal outcome - win, loss, draw, etc. - to the game states that led to it.)
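    A minimal sketch of one back-propagation step for a tiny 2-2-1 sigmoid network; the weights, input, target, and learning rate are made-up illustrative values, not from the card.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# made-up weights: two hidden units (each with two input weights) and one output unit
w_h = [[0.5, -0.4], [0.3, 0.8]]
w_o = [0.7, -0.2]
x, target, lr = [1.0, 0.0], 1.0, 0.5

# forward pass
h = [sigmoid(sum(w * xi for w, xi in zip(w_h[j], x))) for j in range(2)]
y = sigmoid(sum(w * hj for w, hj in zip(w_o, h)))

# backward pass: error terms ("deltas") flow from the output back to the hidden layer
delta_o = (target - y) * y * (1 - y)
delta_h = [delta_o * w_o[j] * h[j] * (1 - h[j]) for j in range(2)]

# weight updates are proportional to each unit's delta and its input
w_o = [w_o[j] + lr * delta_o * h[j] for j in range(2)]
w_h = [[w_h[j][i] + lr * delta_h[j] * x[i] for i in range(2)] for j in range(2)]

print(round(y, 3), [round(w, 3) for w in w_o])
```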
  17. Describe MEA?
    MEA stands for Means-Ends Analysis, and was used in GPS (developed by Allen Newell and Herbert Simon). In MEA, one solves a problem by considering the differences (obstacles) between the current problem state and the goal state, and repeatedly choosing actions that reduce those differences.
  18. Who is Roger Schank?
    Roger Schank worked on natural language understanding, building programs that tried to root understanding in real-world knowledge (e.g., scripts for stereotyped situations) rather than in artificial microworlds. (SHRDLU, the program that functions in a block world of blocks on a tabletop and understands sentences in English, was written by Terry Winograd.)
  19. Who is Frank Rosenblatt?
    Frank Rosenblatt came up with perceptrons, each of which is a model of a neuron. It takes a weighted sum of its inputs, compares that to a threshold value of T, and outputs 1 if the weighted sum is greater than the threshold, 0 otherwise. Later improved to consider feedback from the outcome generated by this perceptron.
  20. What is an agent?
    An agent is anything that can perceive its environment through sensors and act upon that environment through effectors.
  21. What is a sensor?
    A sensor provides information to an agent about its environment/state.
  22. What is an effector?
    An effector provides a means for the agent to act on its environment.
  23. What is meant by 'environment'?
    The world in which the agent exists and must function. For an agent that plays a game, its environment is the game board and the set of game rules. For a robotic agent, it might be a room in which the robot performs a task, a description of the task and how to do it, and a specification of the goal state(s).
  24. What is a rational agent?
    • A rational agent is an agent that acts rationally:
    • Acts logically when possible.
    • Takes the best possible action in a situation where there is no provably correct thing to do.
  25. What does PEAS stand for?
    Performance measure, Environment, Actuators, Sensors
  26. What is PEAS?
    It is a way of describing the problem the AI is to solve, the goal(s), the actions the agent may take, and the abilities the AI has to perceive its environment and the results of its actions on its environment.
  27. What is a strategic environment?
    A strategic environment is deterministic except for the action(s) taken by other agent(s).
  28. What is a state?
    A state is the set of factors required by the agent to act appropriately in its environment.
  29. States can be described as discrete or continuous. How can you tell them apart?
    States are discrete when the experience can be divided into separate, distinct states, with change happening in discrete steps rather than continuously over time (e.g., board positions in chess); states are continuous when they vary smoothly over time (e.g., the position of a taxi while driving).
  30. What is the state space?
    It is the set of states reachable from the initial state by any sequence of actions.
  31. What is search?
    The activity of looking for a sequence of actions that achieves the goal.
  32. What is a search tree?
    • A tree expanded from the initial state (i.e., the root). Each node represents a state. This tree is generated dynamically during the search process.
    • Does the generated search tree represent the entire search space? No; it contains only the portion of the state space explored so far.
  33. List the blind search strategies.
    • breadth first search
    • depth first search
    • uniform cost search
    • depth limited search
    • iterative deepening search
  34. Describe blind methods.
    • Blind search methods use no information about the potential cost to reach the goal from a successor.
    • Blind search methods select successors based on some external strategies, regardless of what the state looks like.
    • Blind search methods are systematic; not random.
  35. Describe BFS.
    Breadth first search expands the search tree level by level, left to right, until the first goal node is encountered. It always finds a shallowest goal first (an optimal solution when all step costs are equal), but it does not find all solutions.
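    A minimal BFS sketch; the graph, start state, and goal are illustrative assumptions, not from the card.

```python
from collections import deque

def bfs(start, goal, successors):
    frontier = deque([[start]])           # FIFO queue of paths => expand level by level
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path                    # first goal found is a shallowest one
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D', 'E'], 'D': ['F'], 'E': ['F'], 'F': []}
print(bfs('A', 'F', lambda s: graph[s]))   # ['A', 'B', 'D', 'F']
```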
  36. Describe DFS.
    Nodes are placed on a stack when they are discovered, causing the search to explore the tree deeper whenever possible. The first solution found is not guaranteed to be optimal, but continuing the search (in a finite tree) will eventually find all solutions.
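    A minimal DFS sketch using an explicit stack; the graph is the same illustrative example as above.

```python
def dfs(start, goal, successors):
    frontier = [[start]]                  # LIFO stack of paths => go deep first
    while frontier:
        path = frontier.pop()
        state = path[-1]
        if state == goal:
            return path                    # first solution found; may not be optimal
        for nxt in successors(state):
            if nxt not in path:            # avoid cycles along the current path
                frontier.append(path + [nxt])
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D', 'E'], 'D': ['F'], 'E': ['F'], 'F': []}
print(dfs('A', 'F', lambda s: graph[s]))   # ['A', 'C', 'E', 'F']
```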
  37. Describe UCS.
    Uniform Cost Search is a version of BFS that is useful when step costs are not all 1 (shortest path between two cities, for example). OPEN is a priority queue, and nodes are inserted into it in order of cost so far. In this way, UCS can find the shortest path in cases where BFS would fail (e.g., when the cheapest path has more hops than another path).
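    A minimal uniform cost search sketch; the weighted graph is an illustrative assumption. The direct hop A->C costs 10, while the longer route A->B->C costs only 3, so UCS returns the cheaper path even though it has more hops.

```python
import heapq

def ucs(start, goal, successors):
    frontier = [(0, [start])]              # priority queue ordered by cost so far g(n)
    best_g = {start: 0}
    while frontier:
        g, path = heapq.heappop(frontier)
        state = path[-1]
        if state == goal:
            return g, path
        for nxt, step in successors(state):
            new_g = g + step
            if new_g < best_g.get(nxt, float('inf')):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g, path + [nxt]))
    return None

roads = {'A': [('B', 1), ('C', 10)], 'B': [('C', 2)], 'C': []}
print(ucs('A', 'C', lambda s: roads[s]))   # (3, ['A', 'B', 'C'])
```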
  38. Describe DLS - depth limited search.
    This is DFS with a predetermined depth cutoff. It only finds a solution if one exists at or above the depth cutoff. It can be tricky to guess the right depth, and what looks good at one depth might look bad at the next level down.
  39. Describe IDS - Iterative deepening search.
    IDS is an extension of depth limited search in which DLS is run repeatedly with increasing depth limits (0, 1, 2, ...) until a solution is found. Time and space complexity are thus bounded by the depth at which the shallowest solution exists, which provides the added benefit of optimality (when step costs are equal).
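    A minimal sketch of depth limited search plus the iterative deepening driver that reruns it with limits 0, 1, 2, ...; the small tree is an illustrative assumption.

```python
def dls(state, goal, successors, limit, path=()):
    path = path + (state,)
    if state == goal:
        return list(path)
    if limit == 0:
        return None                         # depth cutoff reached
    for nxt in successors(state):
        result = dls(nxt, goal, successors, limit - 1, path)
        if result:
            return result
    return None

def ids(start, goal, successors, max_depth=20):
    for limit in range(max_depth + 1):      # rerun DLS with increasing depth limits
        result = dls(start, goal, successors, limit)
        if result:
            return result                    # found at the shallowest possible depth
    return None

tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'], 'D': [], 'E': [], 'F': []}
print(ids('A', 'F', lambda s: tree[s]))     # ['A', 'C', 'F']
```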
  40. What is a heuristic?
    A heuristic is a function that returns a numeric value indicating the estimated desirability of a node n, often in terms of estimated remaining cost or distance from n to a goal.
  41. List some heuristic search strategies.
    • best first
    • greedy
    • A*
    • genetic
    • hill-climbing
    • local beam
    • simulated annealing
  42. How does a heuristic search work?
    Each newly generated state (i.e., successor) is inserted into the OPEN list based on its heuristic value (in some fashion), so that the state with the best score is at the front and is expanded first.
  43. Describe best first search.
    Best first search uses a priority queue that stores nodes in order of best f value. The f value is the result of the heuristic function and may indicate the number of tiles out of place, the Manhattan distance to the goal, etc.
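    A minimal best first search sketch: OPEN is a priority queue ordered by f(n). With f(n) = h(n) (the estimated remaining cost) it behaves like greedy search; with f(n) = g(n) + h(n) it becomes A*. The graph and heuristic values are illustrative assumptions.

```python
import heapq
from itertools import count

def best_first(start, goal, successors, f):
    tie = count()                            # tie-breaker so the heap never compares paths
    frontier = [(f(start), next(tie), [start])]
    visited = set()
    while frontier:
        _, _, path = heapq.heappop(frontier)
        state = path[-1]
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for nxt in successors(state):
            heapq.heappush(frontier, (f(nxt), next(tie), path + [nxt]))
    return None

graph = {'S': ['A', 'B'], 'A': ['G'], 'B': ['G'], 'G': []}
h = {'S': 5, 'A': 2, 'B': 4, 'G': 0}          # made-up estimates of remaining cost
print(best_first('S', 'G', lambda s: graph[s], lambda s: h[s]))   # ['S', 'A', 'G']
```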
  44. Describe greedy search.
    In greedy search, the heuristic represents the estimated cost to get to the goal. OPEN is a priority queue with the lowest h(n) values at the front. Optimality is not guaranteed with greedy search because the locally optimal choice may not lead to the globally optimal solution. Time and space complexity: O(b^m), where b is the branching factor and m is the maximum depth.
  45. Describe A* search.
    A* is a combination of the UCS and greedy search methods in which the value stored with each node is f(n) = g(n) + h(n), i.e., f(n) is the sum of the cost so far and the estimated cost to reach the goal from this node. OPEN is a priority queue in ascending order of f. The optimal solution is found (provided h never overestimates the remaining cost, i.e., is admissible).
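    A minimal A* sketch with f(n) = g(n) + h(n); the weighted graph and (admissible) heuristic values are illustrative assumptions.

```python
import heapq
from itertools import count

def a_star(start, goal, successors, h):
    tie = count()
    frontier = [(h(start), next(tie), 0, [start])]     # entries are (f, tie, g, path)
    best_g = {start: 0}
    while frontier:
        _, _, g, path = heapq.heappop(frontier)
        state = path[-1]
        if state == goal:
            return g, path
        for nxt, step in successors(state):
            new_g = g + step
            if new_g < best_g.get(nxt, float('inf')):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g + h(nxt), next(tie), new_g, path + [nxt]))
    return None

graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 6)], 'B': [('G', 3)], 'G': []}
h = {'S': 6, 'A': 5, 'B': 3, 'G': 0}
print(a_star('S', 'G', lambda s: graph[s], lambda s: h[s]))   # (6, ['S', 'A', 'B', 'G'])
```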
  46. Describe genetic search.
    An initial population is created. Each member of the population is evaluated using the fitness function; pairs are selected (favoring fitter members), then crossover and mutation are applied to produce the next generation. This process is repeated until a solution is found.
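    A minimal genetic search sketch on bit strings, where the fitness function simply counts 1 bits ("all ones" is the solution). The population size, mutation rate, and other parameters are illustrative assumptions.

```python
import random

random.seed(0)
LENGTH, POP, GENERATIONS, MUT_RATE = 12, 20, 100, 0.05

def fitness(bits):                        # higher is better: number of 1 bits
    return sum(bits)

def crossover(a, b):                      # single-point crossover producing two children
    point = random.randint(1, LENGTH - 1)
    return a[:point] + b[point:], b[:point] + a[point:]

def mutate(bits):                         # flip each bit with a small probability
    return [1 - bit if random.random() < MUT_RATE else bit for bit in bits]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == LENGTH:   # stop once a solution is found
        break
    parents = population[:POP // 2]        # select the fitter half as parents
    next_gen = []
    while len(next_gen) < POP:
        c1, c2 = crossover(random.choice(parents), random.choice(parents))
        next_gen += [mutate(c1), mutate(c2)]
    population = next_gen

best = max(population, key=fitness)
print(fitness(best), best)
```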
  47. Describe hill-climbing search.
    Hill climbing search is also known as greedy local search. It always looks at the successors of the current state and chooses the one that moves it most towards its goal. The problem is that it could get stuck at a local maximum, get lost on a plateau (which way to go?), or stop at a dead end.
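    A minimal hill climbing sketch on a made-up 1-D "landscape" (index = state, entry = value); the landscape and start state are illustrative assumptions. Starting at index 1, the search stops at a local maximum (value 4) and never reaches the global maximum (value 9).

```python
def hill_climb(start, value, neighbors):
    current = start
    while True:
        best = max(neighbors(current), key=value)
        if value(best) <= value(current):
            return current                  # no better successor: local max, plateau, or dead end
        current = best

landscape = [0, 2, 4, 3, 1, 5, 9, 6, 2]
value = lambda i: landscape[i]
neighbors = lambda i: [j for j in (i - 1, i + 1) if 0 <= j < len(landscape)]

print(hill_climb(1, value, neighbors))      # 2 -- stuck at the local max, not the global max at index 6
```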
  48. Describe local beam search.
    Start with k randomly generated states. At each step, generate successors for all k states. Keep the best k states. ***All k states share information in all steps.*** (This is better than hill climbing with restarts because the nodes share information.)
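    A minimal local beam search sketch (k = 3) on the same made-up landscape used above; the start states are illustrative assumptions. Because the successors of all k states compete in one shared pool, a beam that happens to start near the higher hill pulls the whole search toward the global maximum.

```python
landscape = [0, 2, 4, 3, 1, 5, 9, 6, 2]
value = lambda i: landscape[i]
neighbors = lambda i: [j for j in (i - 1, i + 1) if 0 <= j < len(landscape)]

def local_beam(starts, k=3, steps=10):
    states = list(starts)
    for _ in range(steps):
        pool = set(states)
        for s in states:                                    # successors of ALL k states...
            pool.update(neighbors(s))
        states = sorted(pool, key=value, reverse=True)[:k]  # ...compete in one shared pool
    return states[0]

print(local_beam([0, 1, 4]))    # 6 -- the global max (value 9), which plain hill climbing from 1 misses
```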
  49. Describe simulated annealing.
    • Simulated annealing introduces a 'temperature' variable that decreases over time. When temperature is high, the algorithm is more likely to select a random solution. As time passes, temperature decreases, reducing the likelihood of a "bad" move.
    • If T decreases slowly enough, then simulated annealing search will find a global optimum with probability approaching 1.
    • Widely used in VLSI layout, airline scheduling, etc.
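    A minimal simulated annealing sketch on the same style of made-up 1-D landscape; the temperature schedule and all parameters are illustrative assumptions.

```python
import math
import random

random.seed(0)
landscape = [0, 2, 4, 3, 1, 5, 9, 6, 2]
value = lambda i: landscape[i]
neighbors = lambda i: [j for j in (i - 1, i + 1) if 0 <= j < len(landscape)]

def simulated_annealing(start, t0=10.0, cooling=0.95, steps=500):
    current, t = start, t0
    for _ in range(steps):
        nxt = random.choice(neighbors(current))
        delta = value(nxt) - value(current)
        # always accept an improvement; accept a "bad" move with probability e^(delta/T),
        # which shrinks as the temperature T decreases
        if delta > 0 or random.random() < math.exp(delta / t):
            current = nxt
        t *= cooling
    return current

print(simulated_annealing(1))   # usually escapes the local max near the start and settles near index 6
```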
  50. What is crossover?
    Crossover is a strategy in genetic algorithms in which two parents are each split at the same point, and the first part of parent A is combined with the second part of parent B (and vice versa) to produce two offspring.
  51. What is mutation?
    The string that represents the genetic material can be mutated – one or more bits can be randomly changed. Applies to genetic algorithms.
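    A tiny worked example of single-point crossover (Q50) and bit-flip mutation (Q51) on 8-bit strings; the parent strings, split point, and mutation rate are illustrative assumptions.

```python
import random
random.seed(1)

parent_a, parent_b = "11110000", "00001111"
point = 3                                             # split both parents after bit 3
child_1 = parent_a[:point] + parent_b[point:]         # '111' + '01111' -> '11101111'
child_2 = parent_b[:point] + parent_a[point:]         # '000' + '10000' -> '00010000'

def mutate(bits, rate=0.1):
    """Flip each bit independently with probability `rate`."""
    return "".join(b if random.random() >= rate else str(1 - int(b)) for b in bits)

print(child_1, child_2, mutate(child_1))
```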
  52. What is a fitness function?
    A fitness function is the heuristic used in genetic algorithms; it generally gives higher values for better states.
  53. Describe minimax search.
    • Minimax search is a search used in adversarial situations (strategic environments).
    • 1. Generate the entire game tree.
    • 2. Apply the utility function to each terminal state to get its value.
    • 3. Propagate the state values from the leaf nodes up to the root, with MAX taking the maximum and MIN taking the minimum of the children's values at each level.
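    A minimal minimax sketch over a hand-built two-ply game tree; the tree and the utility values at the leaves are illustrative assumptions.

```python
def minimax(node, maximizing):
    if isinstance(node, (int, float)):            # terminal state: apply the utility function
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Two plies: MAX chooses among three moves, then MIN replies.
game_tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(game_tree, True))    # 3 -- MAX's best guaranteed outcome
```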
  54. What are the levels in a minimax tree called?
    Plies
  55. What is alpha beta pruning?
    • Alpha beta pruning overcomes weakness of minimax search (exploring useless paths, looking at too many nodes, etc.)
    • The strategy is: a path that is being expanded and is clearly worse than a known path can be abandoned early.
    • Alpha-beta search – performs depth first minimax search, updates alpha and beta during the search, and prunes a sub-tree as soon as it is known to be worse than the current alpha or beta value.
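    A minimal alpha-beta sketch (depth first minimax with pruning) on the same illustrative game tree as above; once the first leaf of the second subtree evaluates to 2, the remaining leaves 4 and 6 are pruned because MAX already has a move worth 3.

```python
def alphabeta(node, maximizing, alpha=float('-inf'), beta=float('inf')):
    if isinstance(node, (int, float)):             # terminal state: utility value
        return node
    if maximizing:
        best = float('-inf')
        for child in node:
            best = max(best, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, best)
            if alpha >= beta:                      # MIN has a better option elsewhere: prune
                break
        return best
    best = float('inf')
    for child in node:
        best = min(best, alphabeta(child, True, alpha, beta))
        beta = min(beta, best)
        if alpha >= beta:                          # MAX has a better option elsewhere: prune
            break
    return best

game_tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(game_tree, True))                  # 3, with leaves 4 and 6 never examined
```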
  56. Describe the effectiveness of alpha beta pruning.
    • The effectiveness depends largely upon the order in which the nodes are generated.
    • Randomly ordered moves: reduces the effective branching factor to about b^(3/4), which reduces time complexity to O(b^(3d/4)).
    • Perfectly ordered (best moves first): reduces the effective branching factor to b^(1/2), which reduces time complexity to O(b^(d/2)).
  57. What are the characteristics of a decent evaluation function?
    • 1. Agrees with the utility function on terminal states, i.e., the evaluation function becomes the utility function at the end of the game.
    • 2. Can be computed within a reasonable amount of time.
    • 3. Reflects the actual chances of winning reasonably accurately.
  58. What is a utility function?
    A utility function gives a numeric value for the outcome of a game (+1, 0, -1), given the state of the game and the player.
  59. Describe Monte Carlo Tree Search (MCTS).
    • MCTS uses results of simulations to guide the growth of the game tree. Uses exploitation (focus on promising moves) and exploration (focus on moves where uncertainty of evaluation is high - infrequently explored moves).
    • Basic Idea
    • Build lookahead tree (e.g. game tree)
    • Use rollouts (simulations) to generate rewards
    • Apply UCB-like formula to interior nodes of tree
    • Choose “optimistically” where to expand next
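    A minimal sketch of a UCB1-style selection rule of the kind MCTS uses to decide where to expand next; the child statistics (wins, visits) and the exploration constant are illustrative assumptions. The rarely visited third move ends up with the highest score, showing how the formula balances exploitation and exploration.

```python
import math

def ucb1(child_value, child_visits, parent_visits, c=1.4):
    if child_visits == 0:
        return float('inf')                       # unvisited moves are tried first
    exploitation = child_value / child_visits     # average reward from rollouts
    exploration = c * math.sqrt(math.log(parent_visits) / child_visits)
    return exploitation + exploration

# (wins, visits) for three candidate moves after 30 simulations through the parent node.
children = [(10, 20), (4, 6), (2, 4)]
parent_visits = 30
scores = [ucb1(w, n, parent_visits) for w, n in children]
best = max(range(len(children)), key=lambda i: scores[i])
print([round(s, 2) for s in scores], "expand move", best)
```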
  60. Name an efficient learning algorithm used in neural nets today.
    Back-propagation inspired the re-emergence of neural nets in AI.
  61. What are the 5 different types of agents?
    • 1. Simple reflex agents
    • 2. Model-based reflexive agents
    • 3. Goal-based agents
    • 4. Utility-based agents
    • 5. Learning agents
  62. Describe simple reflex agents.
    • *Act based on conditions (sense conditions -> apply rules -> act)
    • *Simple to program
    • *No memory
    • *Efficient in terms of decision making process
    • *Not effective in many situations due to lack of memory and incomplete percepts in a partially observable world (e.g., a broken tail light, a driver's blind spot)
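    A minimal simple reflex agent sketch in the style of the two-square vacuum-cleaner world; the percepts and condition-action rules are illustrative assumptions (note that no memory is used).

```python
def simple_reflex_agent(percept):
    location, status = percept            # sense conditions...
    if status == 'Dirty':                 # ...apply condition-action rules...
        return 'Suck'
    if location == 'A':
        return 'Right'
    return 'Left'                         # ...act

for percept in [('A', 'Dirty'), ('A', 'Clean'), ('B', 'Dirty'), ('B', 'Clean')]:
    print(percept, '->', simple_reflex_agent(percept))
```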
  63. Describe model-based reflex agents.
    • *Has memory to save prior state(s)
    • *Action could take longer (due to this memory)
    • *Disadvantage: No goal!
  64. Describe goal-based agents.
    • *Is not limited to condition-action rules.
    • *Possible actions are evaluated based on estimations of their consequences and how these consequences would help achieve the goal (though often described simply as good/happy or bad/unhappy).
    • *Either the action with best evaluation score is selected (gunning for optimality) or
    • *Any action that gets closer to the goal is selected (more efficient, hopefully)
    • *Applicable topic – searching
    • *Advantage: flexible
    • *Disadvantage: knowledge of the goal is not sufficient for all problems; a model of how the world works is also needed, and it is sometimes less efficient
  65. Describe Utility - based agents.
    • Use a utility function (e.g., a heuristic) to calculate the "goodness" (e.g., distance to the goal) of a state resulting from an action on the current state.
    • The action whose resulting state has the best score is selected.
    • Applicable topic – searching and planning
    • Advantage: most likely to find the optimal solution.
    • Disadvantage: could be disastrous if the utility function is incorrect.
  66. Describe Learning agents.
    • Goal: improve performance over time.
    • 3 additional components: Critic, Learning element, Problem Generator
    • Applicable topic – machine learning (supervised or unsupervised)
  67. List the important properties of an agent's task environment.
    • *Fully observable vs. partially observable
    • *Single agent vs. multi-agents
    • *Deterministic vs. non-deterministic
    • *episodic vs. sequential
    • *discrete vs. continuous
    • *static vs. dynamic
  68. Describe fully observable vs. partially observable environments.
    Whether the sensors can detect all relevant aspects of the environment.
  69. Describe single-agent vs. multi-agent environments.
    • *how can you determine if another object is an agent?
    • *If there are multiple agents, are they cooperative or competitive?
  70. Describe deterministic vs. non-deterministic environments.
    Whether the next state is completely determined by the current state and the action taken by the agent, or somewhat determined by it, or completely random.
  71. A strategic environment is a special case of what type of environment?
    A strategic environment is *deterministic* except for the actions of other agents.
  72. Describe episodic vs. sequential environments.
    Whether the experience can be divided into separate, atomic "episodes", each consisting of the agent perceiving and then performing a single action. An episode does not affect any future episode, nor does it depend on any past episodes. (e.g., card games)
  73. Describe discrete vs. continuous environments.
    Whether the experience can be divided into separate, distinct states, with change happening in discrete steps rather than continuously over time. (e.g., a chess game is discrete; taxi driving is continuous)
  74. Describe static vs. dynamic environments.
    Whether the environment can change while an agent is deliberating.
  75. List the 4 types of problems, as defined by the environment.
    • 1. single state problem
    • 2. multiple state problem
    • 3. contingency problem
    • 4. exploration
  76. Describe single state problems.
    Single state problems are fully observable and deterministic. The current state and the action determine the successor state. E.g., 8 puzzle.
  77. Describe multiple state problems.
    Multiple state problems are deterministic, but the environment is only partially observable (or non-deterministic and fully observable). The current state and the action determine *possible* successor states. E.g., backgammon, chess.
  78. Describe contingency problems.
    In a contingency problem the environment is non-deterministic or partially observable, so the solution is a tree rather than a single action sequence: each branch of the tree must deal with a possible contingency that may arise during execution.
  79. Describe exploration problems.
    Exploration problems are non-deterministic; the agent does not know the effects of its actions and must act in order to discover them.
  80. What are 4 characteristics to consider when comparing search strategies?
    • *completeness - is the algorithm guaranteed to find a solution if one exists?
    • *time complexity - how long does it take to find a solution (is there an established upper bound)?
    • *space complexity - how much memory is needed to perform the search?
    • *optimality - does the strategy find the best solution (e.g., shortest path)?
  81. Describe the difference between Greedy search method and hill climbing.
    The greedy method chooses the successor with the lowest estimated cost to reach the goal from open at every step. Hill climbing only considers the successors of the current node. As such, it could get stuck at a local max (or min), or lost on a plateau, or stuck at a dead end.
  82. What are the weaknesses of hill climbing?
    • The weaknesses of hill climbing (and their remedies) are:
    • 1) Get stuck at local max or min (random restarts solve this)
    • 2) Get stuck at dead end (random restarts fix this)
    • 3) Get lost on plateau (random restarts can solve this)
    • Local beam search is an improvement on hill climbing in which the k states share information with each other; this also helps with all of these problems.
  83. Design simple heuristics.
  84. Determine characteristics of various heuristics.
  85. Perform hill climbing search on paper.
  86. Perform simulated annealing search on paper.
  87. Perform local beam search on paper.
  88. Perform genetic search on paper.
  89. **Perform alpha-beta search (minimax search with alpha-beta pruning) on paper. Understand how alpha and beta are used in the pruning process.