What is a Perceptron in Neural Networks?
[et_pb_section bb_built=”1″ _builder_version=”3.16.1″ custom_padding=”54px|0px|50px|0px|false|false” next_background_color=”#000000″][et_pb_row _builder_version=”3.16.1″][et_pb_column type=”3_5″][et_pb_text _builder_version=”3.16.1″]
Wikipedia says: "The perceptron is an algorithm for supervised learning of binary classifiers." In other words, it is a type of linear classifier.
[/et_pb_text][et_pb_text _builder_version=”3.16.1″]In this post, we will see what a perceptron is and learn the mathematics behind this neuron in the simplest possible way.
So, let's get started.
[/et_pb_text][/et_pb_column][et_pb_column type=”2_5″][et_pb_image src=”http://tec4tric.com/wp-content/uploads/2018/09/animated-think.gif” align=”center” _builder_version=”3.16.1″ /][/et_pb_column][/et_pb_row][et_pb_row _builder_version=”3.16.1″][et_pb_column type=”4_4″][et_pb_text _builder_version=”3.16.1″]First of all, to understand the perceptron you need to know what the McCulloch-Pitts neuron is. Please read that blog post first, because the McCulloch-Pitts neuron and the perceptron are closely related.
[/et_pb_text][et_pb_button button_url=”http://tec4tric.com/2018/10/mcculloch-pitts-neuron.html” url_new_window=”on” button_text=”What is Mcculloch Pitts Neuron?” button_alignment=”center” _builder_version=”3.16.1″ custom_button=”on” button_text_size=”30px” button_text_color=”#000000″ button_border_width=”2px” button_border_color=”#000000″ button_border_radius=”14px” button_font=”||||||||” button_icon=”%%52%%” button_on_hover=”off” box_shadow_style=”preset2″ button_text_size__hover=”20″ /][/et_pb_column][/et_pb_row][/et_pb_section][et_pb_section bb_built=”1″ _builder_version=”3.16.1″ prev_background_color=”#000000″ next_background_color=”#000000″][et_pb_row _builder_version=”3.16.1″][et_pb_column type=”4_4″][et_pb_text _builder_version=”3.16.1″ text_font=”||||||||” text_font_size=”29px” text_font_size_last_edited=”off|desktop” header_font=”||||||||” header_text_align=”left” header_font_size=”45px” text_orientation=”center”]Perceptron
[/et_pb_text][et_pb_text _builder_version=”3.16.1″]The perceptron is a more general computational model than the McCulloch-Pitts neuron.
[/et_pb_text][/et_pb_column][/et_pb_row][et_pb_row _builder_version=”3.16.1″][et_pb_column type=”3_5″][et_pb_image src=”http://tec4tric.com/wp-content/uploads/2018/10/perceptron.jpg” show_in_lightbox=”on” _builder_version=”3.16.1″ /][/et_pb_column][et_pb_column type=”2_5″][et_pb_text _builder_version=”3.16.1″]As you can see, {X1, X2, X3, …, Xn} are the inputs and Y is the output, while g and f are functions: g aggregates the inputs and f makes the firing decision. There are two types of inputs: excitatory inputs, which push the neuron toward firing, and inhibitory inputs, which can prevent it from firing. Here {X1, X2, X3, …, Xn} are the excitatory inputs, and (w1, w2, w3, …, wn) are their weights.
The main difference between the McCulloch-Pitts neuron and the perceptron is the introduction of numerical weights (w1, w2, w3, …, wn) for the inputs, together with a mechanism for learning these weights.
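The weighted decision described above can be sketched in a few lines of Python. This is a minimal illustration, not the post's own code; the names `perceptron_output`, `x`, `w`, and `theta`, and all the concrete numbers, are illustrative assumptions.

```python
# Minimal sketch of a perceptron's decision rule: the neuron fires (outputs 1)
# when the weighted sum of its inputs reaches the threshold theta.

def perceptron_output(x, w, theta):
    """Return 1 if the weighted sum of inputs x (with weights w) is >= theta, else 0."""
    weighted_sum = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if weighted_sum >= theta else 0

# Example: two inputs with unequal importance (weights are made-up numbers).
print(perceptron_output([1, 0], [0.7, 0.3], 0.5))  # weighted sum 0.7 >= 0.5, prints 1
print(perceptron_output([0, 1], [0.7, 0.3], 0.5))  # weighted sum 0.3 <  0.5, prints 0
```

Note how the weights let two inputs with the same value contribute differently to the decision, which a plain McCulloch-Pitts neuron (where all excitatory inputs count equally) cannot do.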
[/et_pb_text][/et_pb_column][/et_pb_row][et_pb_row _builder_version=”3.16.1″][et_pb_column type=”4_4″][et_pb_divider _builder_version=”3.16.1″ show_divider=”off” /][/et_pb_column][/et_pb_row][et_pb_row _builder_version=”3.16.1″][et_pb_column type=”3_5″][et_pb_image src=”http://tec4tric.com/wp-content/uploads/2018/10/perceptron1.png” show_in_lightbox=”on” use_overlay=”on” align=”center” _builder_version=”3.16.1″ /][/et_pb_column][et_pb_column type=”2_5″][et_pb_text _builder_version=”3.16.1″]This equation is the same as that of the McCulloch-Pitts neuron; the only difference is that the weights ( W ) are included. These weights are learned and updated during training, which was not possible with the McCulloch-Pitts neuron.
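To make "these weights are learned" concrete, here is a sketch of the classic perceptron learning rule: after each wrong prediction, each weight is nudged in the direction that would have made the prediction correct, and the threshold is nudged the opposite way. The function names, the learning rate, and the AND-function example are all illustrative assumptions, not something from the post itself.

```python
# Sketch of the classic perceptron learning rule on a tiny example.

def predict(x, w, theta):
    """Fire (1) when the weighted sum of inputs reaches the threshold."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0

def train(samples, w, theta, lr=0.1, epochs=20):
    """samples: list of (inputs, target) pairs with targets 0 or 1."""
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(x, w, theta)          # -1, 0, or +1
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            theta -= lr * error   # lowering the threshold acts like raising a bias
    return w, theta

# Learn the AND function of two binary inputs (linearly separable, so the
# perceptron convergence theorem guarantees this terminates with a solution).
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, theta = train(data, [0.0, 0.0], 0.0)
print([predict(x, w, theta) for x, _ in data])  # -> [0, 0, 0, 1]
```

The update rule only changes anything when the prediction is wrong, which is why training stops moving once every sample is classified correctly.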
[/et_pb_text][/et_pb_column][/et_pb_row][et_pb_row _builder_version=”3.16.1″][et_pb_column type=”3_5″][et_pb_image src=”http://tec4tric.com/wp-content/uploads/2018/10/perceptron2.png” show_in_lightbox=”on” align=”center” _builder_version=”3.16.1″ /][/et_pb_column][et_pb_column type=”2_5″][et_pb_text _builder_version=”3.16.1″]In this equation, theta ( θ ) has simply been moved from the right-hand side to the left-hand side to simplify the expression.
[/et_pb_text][/et_pb_column][/et_pb_row][et_pb_row _builder_version=”3.16.1″][et_pb_column type=”4_4″][et_pb_divider _builder_version=”3.16.1″ show_divider=”off” /][/et_pb_column][/et_pb_row][et_pb_row _builder_version=”3.16.1″][et_pb_column type=”3_5″][et_pb_image src=”http://tec4tric.com/wp-content/uploads/2018/10/perceptron3.png” align=”center” _builder_version=”3.16.1″ /][/et_pb_column][et_pb_column type=”2_5″][et_pb_text _builder_version=”3.16.1″]Look carefully: here the sum starts at i = 0, which means { W0X0 + W1X1 + W2X2 + … + WnXn } ≥ 0.
Now, putting W0 = −θ and X0 = 1, we get { −θ + W1X1 + W2X2 + … + WnXn } ≥ 0, which is exactly the previous equation, where i started at 1. [/et_pb_text][/et_pb_column][/et_pb_row][et_pb_row _builder_version=”3.16.1″][et_pb_column type=”4_4″][et_pb_text _builder_version=”3.16.1″ header_font=”||||||||” header_text_align=”center” header_font_size=”53px” header_text_shadow_style=”preset1″ header_text_shadow_horizontal_length=”0.09em” header_text_shadow_vertical_length=”-0.01em” header_text_shadow_blur_strength=”0.06em”]
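This rewriting is often called the "bias trick", and it is easy to check numerically: folding the threshold into the weights as W0 = −θ with a constant input X0 = 1 gives exactly the same decisions as comparing the weighted sum against θ directly. The snippet below is a small sketch of that check; the function names and the concrete weight values are illustrative assumptions.

```python
# Numeric check that the two perceptron formulations agree:
#   sum(w_i * x_i) >= theta          (threshold on the right)
#   w0*x0 + sum(w_i * x_i) >= 0      with w0 = -theta, x0 = 1

def fires_with_threshold(x, w, theta):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0

def fires_with_bias(x, w, theta):
    w0, x0 = -theta, 1  # the bias trick: W0 = -theta, X0 = 1
    return 1 if w0 * x0 + sum(wi * xi for wi, xi in zip(w, x)) >= 0 else 0

w, theta = [0.4, 0.6], 0.5  # made-up weights and threshold
for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    assert fires_with_threshold(x, w, theta) == fires_with_bias(x, w, theta)
print("both formulations agree")
```

Treating −θ as just another weight is what lets the learning rule update the threshold exactly the same way it updates every other weight.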