Architecture of TensorFlow

Last Updated: 31 Mar, 2022

Prerequisite: Introduction to TensorFlow

TensorFlow is an end-to-end open-source platform for machine learning developed by Google, with many enthusiastic open-source contributors. TensorFlow is scalable and flexible enough to run in data centers as well as on mobile phones, and it can run on a single machine or across multiple machines in a distributed setting. In this article, we will explore the secret behind this extreme flexibility and scalability of TensorFlow.

Programming Model and Basic Concepts

Each computation in TensorFlow is described by a directed graph composed of nodes and edges, where nodes are operations (functions) and edges carry the inputs and outputs of those operations.
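To make the graph model concrete, here is a minimal, framework-free sketch in plain Python. The `Node` class and `evaluate` function are invented for illustration only (they are not TensorFlow API): nodes hold an operation, and edges are modeled as references to the nodes whose outputs an operation consumes.

```python
# Hypothetical sketch of a dataflow graph: nodes are operations,
# edges carry tensor-like values between them. Not the TensorFlow API.

class Node:
    def __init__(self, name, op, inputs=()):
        self.name = name        # operation name
        self.op = op            # the abstract computation (a Python callable here)
        self.inputs = inputs    # incoming edges: nodes whose outputs we consume

def evaluate(node, cache=None):
    """Compute a node's output by first computing its input edges."""
    cache = {} if cache is None else cache
    if node.name not in cache:
        args = [evaluate(n, cache) for n in node.inputs]
        cache[node.name] = node.op(*args)
    return cache[node.name]

# Graph for (a + b) * c, with constants as source nodes.
a = Node("a", lambda: 2.0)
b = Node("b", lambda: 3.0)
c = Node("c", lambda: 4.0)
add = Node("add", lambda x, y: x + y, (a, b))
mul = Node("mul", lambda x, y: x * y, (add, c))

print(evaluate(mul))  # 20.0
```

Evaluating the `mul` node pulls values along its incoming edges, which is the same pull-based execution model a TensorFlow session uses when asked for a node's output.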

  • Inputs and outputs in TensorFlow are called tensors. A tensor is simply a multi-dimensional array whose underlying element type is specified at graph-construction time.
  • The client program that uses TensorFlow builds the dataflow graph using one of the supported programming languages (C++ or Python).
  • An operation has a name and represents an abstract computation. An operation can have attributes, which must be provided at graph-construction time.
  • One frequent use of attributes is to make operations polymorphic; for example, the same Add operation can work over different element types, selected by a type attribute.
  • A kernel is the implementation of an operation for a specific device (such as a CPU or GPU).
  • The client program interacts with the TensorFlow system by creating a session. The session interface provides methods to build the computation graph, and its Run() method computes the outputs of requested nodes in the graph, given the needed inputs.
  • In most machine learning tasks the computation graph is executed many times, and most ordinary tensors do not survive past a single execution; that is why TensorFlow has variables.
  • A variable is a special kind of operation that returns a handle to a persistent tensor, which survives across multiple executions of the graph.
  • Trainable parameters such as weights and biases are stored in tensors held in variables.
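The session and variable ideas above can be sketched with a toy example. `ToySession` and `train_step` are invented names for illustration (not the TensorFlow API): ordinary values are recomputed from the feeds on every run, while the variable store persists between runs, the way trainable weights do.

```python
# Toy "session" sketch: variables live in session state and survive across
# run() calls, while ordinary intermediate tensors are recomputed each time.
# All names are invented for illustration; this is not the TensorFlow API.

class ToySession:
    def __init__(self):
        self.variables = {}          # variable handles -> persistent values

    def run(self, fn, feeds):
        # Each run() recomputes ordinary values from the feeds,
        # but reads and writes the persistent variable store.
        return fn(self.variables, feeds)

def train_step(variables, feeds):
    # "weight" is a trainable parameter held in a variable.
    w = variables.setdefault("weight", 5.0)
    # An ordinary intermediate value: it does not survive past this run.
    step = feeds["lr"]
    variables["weight"] = w - step   # the update persists across runs
    return variables["weight"]

sess = ToySession()
print(sess.run(train_step, {"lr": 1.0}))  # 4.0
print(sess.run(train_step, {"lr": 1.0}))  # 3.0 -- the variable survived
```

The second call sees the weight left behind by the first call, which is exactly the property variables provide in repeated executions of a training graph.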

High-level Architecture

[Figure: High-level architecture of TensorFlow]

  • The first layer of TensorFlow consists of the device layer and the network layer. The device layer contains the implementations for communicating with the various devices (GPU, CPU, TPU) on the operating system where TensorFlow runs, while the network layer contains implementations for communicating with other machines over different networking protocols in the distributed training setting.
  • The second layer of TensorFlow contains kernel implementations for the operations most commonly used in machine learning.
  • The third layer of TensorFlow consists of the distributed master and the dataflow executor. The distributed master distributes workloads across the available devices in the system, while the dataflow executor runs the dataflow graph efficiently.
  • The next layer exposes all of this functionality as an API implemented in C. C is chosen because it is fast, reliable, and available on virtually any operating system.
  • The fifth layer provides client support for Python and C++.
  • The last layer of TensorFlow contains training and inference libraries implemented in Python and C++.
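The split between an abstract operation (second layer) and its per-device kernels (first layer) can be sketched as a registry that maps an operation name and a device to an implementation. The registry, decorator, and function names below are invented for illustration; in real TensorFlow, kernels are registered in C++.

```python
# Sketch of the op/kernel split: one abstract operation ("Add") can have
# several kernels, each implementing it for a particular device. A registry
# maps (op, device) to the kernel. Names are invented for illustration.

KERNELS = {}

def register_kernel(op_name, device):
    def wrap(fn):
        KERNELS[(op_name, device)] = fn
        return fn
    return wrap

@register_kernel("Add", "CPU")
def add_cpu(x, y):
    return x + y    # plain scalar add standing in for a CPU kernel

@register_kernel("Add", "GPU")
def add_gpu(x, y):
    return x + y    # same math; a real GPU kernel would run on-device

def run_op(op_name, device, *args):
    # Dispatch the abstract operation to the kernel for the chosen device.
    return KERNELS[(op_name, device)](*args)

print(run_op("Add", "CPU", 2, 3))  # 5
```

This dispatch-by-device design is what lets the same graph run unchanged on a CPU, a GPU, or a TPU: only the kernel bound to each operation changes.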