## Deep Learning #1 Introduction

The true challenge to artificial intelligence proved to be solving the tasks that are easy for people to perform but hard for people to describe formally.

## Words & Phrases

- Knowledge base
- Logistic regression
- Naive Bayes
- Feature
- Representation learning
- Factors of variation
- Multilayer perceptron (MLP)
- Visible layer => input layer
- Deep probabilistic model

## About Deep Learning

The difficulties faced by systems relying on hard-coded knowledge suggested that AI systems need the ability to acquire their own knowledge, by extracting patterns from raw data.

## Representation Matters

The performance of simple machine learning algorithms, like *logistic regression* and *naive Bayes*, depends heavily on the representation of the data they are given.

One solution to this problem is to use machine learning to discover not only the mapping from representation to output but also the representation itself. This approach is known as *representation learning*.
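A toy sketch of why representation matters (the data, thresholds, and helper names here are illustrative assumptions, not from the text): two classes arranged in concentric rings cannot be separated by thresholding a raw coordinate, but the engineered feature "radius" makes the task trivial.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_rings(n=200):
    """Toy data: inner ring = class 0, outer ring = class 1."""
    angles = rng.uniform(0, 2 * np.pi, n)
    radii = np.where(np.arange(n) % 2 == 0, 1.0, 3.0)  # two rings
    labels = (radii > 2.0).astype(int)
    x = radii * np.cos(angles)
    y = radii * np.sin(angles)
    return np.stack([x, y], axis=1), labels

X, t = make_rings()

# Raw representation: the best threshold on the x-coordinate alone
# still misclassifies many points, since the rings overlap in x.
acc_raw = max(((X[:, 0] > c).astype(int) == t).mean()
              for c in np.linspace(-3.0, 3.0, 61))

# Better representation: the radius separates the classes perfectly.
r = np.hypot(X[:, 0], X[:, 1])
acc_radius = ((r > 2.0).astype(int) == t).mean()

print(acc_raw, acc_radius)  # the radius representation reaches 1.0
```

Representation learning would discover a feature like `r` automatically rather than requiring us to hand-design it.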

## Factors of Variation

A major source of difficulty in many real-world artificial intelligence applications is that many of the factors of variation influence every single piece of data we are able to observe.

## Why Deep Learning

*Deep learning* solves this central problem in representation learning (that a good representation is hard to obtain) by introducing representations that are expressed in terms of other, simpler representations.

- Allows computers to build complex concepts out of simpler concepts, as in the *multilayer perceptron* (MLP)
- Depth allows the computer to learn a multistep computer program
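The composition of simple layers into a complex function can be sketched with a tiny MLP (the weights below are hand-picked for illustration; in practice they would be learned). It computes XOR, a function no single linear layer can represent:

```python
import numpy as np

def relu(z):
    # Elementwise rectified linear unit
    return np.maximum(z, 0)

# Hand-picked weights for a 2-layer MLP that computes XOR.
W1 = np.array([[1.0, 1.0], [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
w2 = np.array([1.0, -2.0])

def mlp_xor(x):
    h = relu(x @ W1 + b1)  # first layer: simple intermediate features
    return h @ w2          # second layer: combines them into the output

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
print(mlp_xor(X))  # [0. 1. 1. 0.]
```

Each layer's output is a simple concept; the depth lets their composition express something none of them could alone.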

## Depth of a Model

There are two main ways of measuring the depth of a model, as follows.

### The Depth of the Computational Graph

Based on the number of sequential instructions that must be executed to evaluate the architecture. We can think of this as the length of the longest path through a flow chart that describes how to compute each of the model's outputs given its inputs.
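This "longest path" measure can be sketched with a toy DAG (the graph encoding and example operations below are my own assumptions, not from the text). Here a logistic unit $\sigma(wx + b)$ has depth 3 if multiply, add, and the nonlinearity each count as one instruction:

```python
def graph_depth(graph, node, memo=None):
    """Longest path from an input to `node`.

    `graph` maps each node to the list of nodes it depends on;
    inputs (nodes with no dependencies) have depth 0.
    """
    if memo is None:
        memo = {}
    if node not in memo:
        deps = graph.get(node, [])
        memo[node] = 0 if not deps else 1 + max(
            graph_depth(graph, d, memo) for d in deps)
    return memo[node]

# y = sigma(w*x + b): multiply, then add, then the nonlinearity.
graph = {
    "mul": ["w", "x"],
    "add": ["mul", "b"],
    "sigma": ["add"],
}
print(graph_depth(graph, "sigma"))  # 3
```

Note that the measured depth depends on what counts as a single instruction: if the whole logistic unit were one primitive, the same model would have depth 1.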

### The Depth of the Probabilistic Modeling Graph

Another approach, used by *deep probabilistic models*, regards the depth of a model as being not the depth of the computational graph but the depth of the graph describing how concepts are related to each other. In this case, the depth of the flowchart of the computations needed to compute the representation of each concept may be much deeper than the graph of the concepts themselves. The graph of computations includes $2n$ layers if we refine our estimate of each concept given the others $n$ times.
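The $2n$-layers point can be illustrated with a toy sketch (the update rule and numbers below are assumptions for illustration): with just two mutually dependent concepts, the concept graph is shallow, yet alternately refining each estimate given the other $n$ times produces a computation with $2n$ sequential layers.

```python
def refine(a, b, n):
    """Alternately refine two mutually dependent concept estimates.

    Each pass updates `a` given `b`, then `b` given the new `a`,
    so one pass adds 2 computational layers.
    """
    layers = 0
    for _ in range(n):
        a = 0.5 * (a + b)  # update concept a given concept b
        layers += 1
        b = 0.5 * (a + b)  # update concept b given the refined a
        layers += 1
    return a, b, layers

a, b, layers = refine(0.0, 1.0, n=3)
print(layers)  # 2 * 3 = 6 computational layers for a 2-concept graph
```

The concept graph here has only two nodes, but the unrolled computation is three times deeper, which is exactly the distinction between the two depth measures.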

## New Words

- uterine => relating to the uterus of a woman or female mammal
- vocal tract
- fender of a car
- quintessential => typical
- silhouette => outline
- insurmountable => insuperable => too great to overcome
- Images *reproduced with permission from* xxx