What is Perceptron in Neural Networks

[et_pb_section bb_built=”1″ _builder_version=”3.16.1″ custom_padding=”54px|0px|50px|0px|false|false” next_background_color=”#000000″][et_pb_row _builder_version=”3.16.1″][et_pb_column type=”3_5″][et_pb_text _builder_version=”3.16.1″]

Wikipedia says: “The Perceptron is a machine learning algorithm for supervised learning of binary classifiers. It is a type of linear classifier.”

[/et_pb_text][et_pb_text _builder_version=”3.16.1″]

In this post, we are going to see what a Perceptron is, and we are going to learn the mathematics behind this neuron in the simplest way.

So, Let’s Get Started.

[/et_pb_text][/et_pb_column][et_pb_column type=”2_5″][et_pb_image src=”http://tec4tric.com/wp-content/uploads/2018/09/animated-think.gif” align=”center” _builder_version=”3.16.1″ /][/et_pb_column][/et_pb_row][et_pb_row _builder_version=”3.16.1″][et_pb_column type=”4_4″][et_pb_text _builder_version=”3.16.1″]

First of all, to understand the Perceptron you NEED to KNOW “What is a McCulloch-Pitts Neuron?”. I request you to read that blog post first, because the McCulloch-Pitts neuron and the Perceptron are very similar.

[/et_pb_text][et_pb_button button_url=”http://tec4tric.com/2018/10/mcculloch-pitts-neuron.html” url_new_window=”on” button_text=”What is Mcculloch Pitts Neuron?” button_alignment=”center” _builder_version=”3.16.1″ custom_button=”on” button_text_size=”30px” button_text_color=”#000000″ button_border_width=”2px” button_border_color=”#000000″ button_border_radius=”14px” button_font=”||||||||” button_icon=”%%52%%” button_on_hover=”off” box_shadow_style=”preset2″ button_text_size__hover=”20″ /][/et_pb_column][/et_pb_row][/et_pb_section][et_pb_section bb_built=”1″ _builder_version=”3.16.1″ prev_background_color=”#000000″ next_background_color=”#000000″][et_pb_row _builder_version=”3.16.1″][et_pb_column type=”4_4″][et_pb_text _builder_version=”3.16.1″ text_font=”||||||||” text_font_size=”29px” text_font_size_last_edited=”off|desktop” header_font=”||||||||” header_text_align=”left” header_font_size=”45px” text_orientation=”center”]

Perceptron

[/et_pb_text][et_pb_text _builder_version=”3.16.1″]

The Perceptron is a more general computational model than the McCulloch-Pitts neuron.

[/et_pb_text][/et_pb_column][/et_pb_row][et_pb_row _builder_version=”3.16.1″][et_pb_column type=”3_5″][et_pb_image src=”http://tec4tric.com/wp-content/uploads/2018/10/perceptron.jpg” show_in_lightbox=”on” _builder_version=”3.16.1″ /][/et_pb_column][et_pb_column type=”2_5″][et_pb_text _builder_version=”3.16.1″] As you can see, {X1, X2, X3, …, Xn} are the inputs and Y is the output, while f and g are functions. There are two types of inputs: excitatory inputs, which contribute to the neuron firing, and inhibitory inputs, which can prevent the neuron from firing regardless of the other inputs. Here {X1, X2, X3, …, Xn} are the excitatory inputs.

Here, (w1, w2, w3, …, wn) are the weights.

The main difference between the McCulloch-Pitts neuron and the Perceptron is the introduction of numerical weights (w1, w2, w3, …, wn) for the inputs, along with a mechanism for learning these weights.
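To make the role of the weights concrete, here is a minimal Python sketch of the Perceptron’s decision rule. The function name and the example numbers are illustrative, not taken from this post:

```python
# A minimal sketch of the Perceptron decision rule (illustrative names/values).
# Unlike the McCulloch-Pitts neuron, each input x_i is multiplied by a
# learnable weight w_i before the sum is compared with the threshold theta.

def perceptron_output(x, w, theta):
    """Return 1 if the weighted sum of the inputs reaches the threshold, else 0."""
    weighted_sum = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if weighted_sum >= theta else 0

# Example: two inputs with unequal importance.
print(perceptron_output([1, 0], [0.7, 0.3], 0.5))  # 1, since 0.7 >= 0.5
print(perceptron_output([0, 1], [0.7, 0.3], 0.5))  # 0, since 0.3 < 0.5
```

Because the weights differ, the same “one input active” pattern can fire or not fire depending on *which* input is active — something the unweighted McCulloch-Pitts neuron cannot express.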

[/et_pb_text][/et_pb_column][/et_pb_row][et_pb_row _builder_version=”3.16.1″][et_pb_column type=”3_5″][et_pb_image src=”http://tec4tric.com/wp-content/uploads/2018/10/perceptron1.png” show_in_lightbox=”on” use_overlay=”on” align=”center” _builder_version=”3.16.1″ /][/et_pb_column][et_pb_column type=”2_5″][et_pb_text _builder_version=”3.16.1″]

This equation is the same as that of the McCulloch-Pitts neuron, except that the weights (w) are included. These weights can be learned and updated, which was not possible in the McCulloch-Pitts neuron.

[/et_pb_text][/et_pb_column][/et_pb_row][et_pb_row _builder_version=”3.16.1″][et_pb_column type=”3_5″][et_pb_image src=”http://tec4tric.com/wp-content/uploads/2018/10/perceptron2.png” show_in_lightbox=”on” align=”center” _builder_version=”3.16.1″ /][/et_pb_column][et_pb_column type=”2_5″][et_pb_text _builder_version=”3.16.1″]

In this equation, I have simply moved theta (θ) from the right-hand side to the left-hand side to simplify the expression.

[/et_pb_text][/et_pb_column][/et_pb_row][et_pb_row _builder_version=”3.16.1″][et_pb_column type=”3_5″][et_pb_image src=”http://tec4tric.com/wp-content/uploads/2018/10/perceptron3.png” align=”center” _builder_version=”3.16.1″ /][/et_pb_column][et_pb_column type=”2_5″][et_pb_text _builder_version=”3.16.1″] Look CAREFULLY: here we start at i = 0, which means

{ W0X0 + W1X1 + W2X2 + … + WnXn } ≥ 0

Now, putting W0 = −θ and X0 = 1, we get { −θ + W1X1 + W2X2 + … + WnXn } ≥ 0, which is the same as the PREVIOUS equation, where i starts at 1. [/et_pb_text][/et_pb_column][/et_pb_row][et_pb_row _builder_version=”3.16.1″][et_pb_column type=”4_4″][et_pb_text _builder_version=”3.16.1″ header_font=”||||||||” header_text_align=”center” header_font_size=”53px” header_text_shadow_style=”preset1″ header_text_shadow_horizontal_length=”0.09em” header_text_shadow_vertical_length=”-0.01em” header_text_shadow_blur_strength=”0.06em”]

This W0 is called the Bias.
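The substitution above (the “bias trick”) can be checked in a few lines of Python. This is only a sketch with illustrative names and values; it verifies that the threshold form and the bias form always agree:

```python
# Sketch of the "bias trick": folding the threshold theta into the weights
# by setting w0 = -theta and x0 = 1, so the firing test becomes "sum >= 0".

def fires_with_threshold(x, w, theta):
    """Original form: compare the weighted sum against theta."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0

def fires_with_bias(x, w, theta):
    """Bias form: prepend w0 = -theta and x0 = 1, then compare against 0."""
    w0, x0 = -theta, 1
    total = w0 * x0 + sum(wi * xi for wi, xi in zip(w, x))
    return 1 if total >= 0 else 0

# The two forms agree on every binary input pattern (illustrative weights).
for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    assert fires_with_threshold(x, [0.6, 0.6], 1.0) == fires_with_bias(x, [0.6, 0.6], 1.0)
```

Treating the bias as just another weight is convenient because the learning rule can then update θ in exactly the same way it updates the other weights.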

[/et_pb_text][/et_pb_column][/et_pb_row][et_pb_row _builder_version=”3.16.1″][et_pb_column type=”4_4″][et_pb_text _builder_version=”3.16.1″ text_font_size=”28px” header_font_size=”39px”]

NOTE THAT:

From the equation, it should be clear that, like the McCulloch-Pitts neuron, a Perceptron separates the input space into two halves: all inputs that produce a 1 lie on one side, and all inputs that produce a 0 lie on the other side.

The difference is that the weights can be learned and the inputs can be real-valued.
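As a rough illustration of that learning mechanism, here is a small Python sketch of the classic perceptron learning rule (which this post does not derive), trained on the OR function, a linearly separable problem. All names and values are illustrative:

```python
# Sketch of the perceptron learning rule on the OR function.
# On each mistake, nudge the weights and bias toward the correct answer.

def predict(x, w, b):
    """Fire (1) if the weighted sum plus bias is non-negative, else 0."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # OR truth table
w, b, lr = [0.0, 0.0], 0.0, 0.1                              # weights, bias, learning rate

for _ in range(20):                       # a few passes over the data
    for x, target in data:
        error = target - predict(x, w, b)                     # 0 when correct
        w = [wi + lr * error * xi for wi, xi in zip(w, x)]    # w_i += lr * error * x_i
        b += lr * error                                       # bias updated the same way

print([predict(x, w, b) for x, _ in data])  # [0, 1, 1, 1] — OR learned
```

Because OR is linearly separable, the rule converges to a separating line after a handful of passes; on a non-separable problem such as XOR, a single Perceptron would never converge.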

[/et_pb_text][et_pb_text _builder_version=”3.16.1″] Source: NPTEL’s Deep Learning Course [/et_pb_text][et_pb_post_nav in_same_term=”off” _builder_version=”3.16.1″ /][/et_pb_column][/et_pb_row][/et_pb_section]