Communication Structure in Operating System

Last Updated: 04 Jan, 2023

In this article, we’ll look at the communication structure of an operating system. An operating system provides an interface between applications and the underlying hardware so that programs can communicate and share resources easily. The resources it manages include memory, keyboard input, disk accesses, CPU cycles, and bandwidth on communication channels (e.g., TCP/IP networks).

Naming and Name Resolution

Name resolution is the process that locates a process or host given its name. A name resolution protocol (NRP) is an operating system’s method for translating names into addresses so that processes can communicate with each other.

The NRP uses two different types of names:

  • Domain names: These are human-readable strings used to identify hosts on the Internet or within your local network, for example “www.example.com”.
  • IP addresses: These are numeric identifiers that uniquely identify every device connected to an IP network. An IPv4 address is a 32-bit number, usually written in dotted-decimal format such as “192.0.2.1”.
The NRP uses both domain names and IP addresses to translate a name into an address. To see how it does this, let’s look at the DNS system in more detail.

DNS: The Domain Name System (DNS) is a distributed database that maps domain names to IP addresses. It is made up of thousands of servers around the world that store records for millions of domains and serve those records to anyone who asks for them. When the user types a domain name into the browser, the browser sends a query to a DNS server asking, “What is the IP address for this domain?”

The DNS server checks its records and replies with the address of the requested host. The browser then makes a request to that address and receives a response from the host, which the browser displays as the page the user requested.

This system works because DNS servers map a domain name to an IP address rather than the other way around: instead of memorizing numeric addresses, the user enters a website’s domain name and lets a DNS server find the corresponding IP address.
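As a concrete illustration, here is a minimal sketch of name resolution from a program’s point of view, using Python’s standard socket module (the OS resolver consults DNS on the program’s behalf; the hostname and helper name below are only examples):

```python
# Minimal sketch of name resolution via the OS resolver, which
# consults DNS (and local sources such as /etc/hosts) for us.
import socket

def resolve(name, port=80):
    """Return the unique addresses the resolver knows for `name`."""
    infos = socket.getaddrinfo(name, port, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})

# "localhost" resolves to loopback addresses such as 127.0.0.1 or ::1;
# what a public domain resolves to depends on your resolver.
print(resolve("localhost"))
```

Real applications rarely call the resolver directly; higher-level APIs (HTTP clients, `socket.create_connection`, and so on) perform this lookup internally.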

Routing Strategies

Routing strategies can be divided into source routing and destination routing.

  1. Source routing is a technique in which the sender specifies, in the message itself, the sequence of nodes the message must pass through; each node forwards the message to the next hop listed in that route. Without such a route, a user sending an email from a home computer cannot know in advance exactly which machines the message will traverse; tracing a message’s journey shows it passing through many different computers before finally reaching its destination.
  2. Destination routing is a technique in which only the final recipient’s address is placed in the message header, and each intermediate node decides the next hop on its own. For example, when users send an email through a service such as Gmail or Hotmail, they specify only the sender’s and recipient’s addresses; the service’s servers then take care of forwarding the message toward its destination. This method is called destination routing because the header carries the destination address rather than a full route; it relies on the intermediate nodes (e.g., the provider’s servers) knowing how to forward the message.
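The two strategies can be contrasted with a toy model. Everything below — the node names, the topology, and the function names — is made up for illustration; real routers implement this far more elaborately:

```python
# Toy model contrasting source routing and destination routing.
from collections import deque

TOPOLOGY = {            # each node -> neighbours it can forward to
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": [],
}

def source_route(path, payload):
    """Source routing: the sender lists every hop in the header."""
    hops_taken = []
    for current, nxt in zip(path, path[1:]):
        assert nxt in TOPOLOGY[current], f"{current} cannot reach {nxt}"
        hops_taken.append(nxt)
    return hops_taken, payload

def destination_route(src, dst, payload):
    """Destination routing: only dst is in the header; the network
    finds a path itself (here: breadth-first search)."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path[1:], payload
        for nxt in TOPOLOGY[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    raise ValueError("no route")

print(source_route(["A", "B", "D"], "hello"))   # sender chose the hops
print(destination_route("A", "D", "hello"))     # network chose the hops
```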

Packet Strategies

Data can be sent to the destination in a single step, or broken into packets and sent using a protocol. The way packets are formed and delivered is called a packet strategy, and there are several common types:

  1. Individual – The data is sent without any intermediate steps between packets. The advantage of this method is that it allows low-latency connections between hosts with fast CPUs and high bandwidths.
  2. Sequential – The data is sent as an ordered stream of packets, which can be pipelined over slow links without waiting for each individual packet to be acknowledged before the next is sent. This approach usually requires more processing power at both ends than individual delivery does; however, on a congested network it can perform better, because traffic can be spread across multiple paths rather than all packets queuing behind one another on a single congested path.
  3. Reliable – This method is used for applications that need to know that all of their data packets will be received. Reliable delivery guarantees that if a packet is lost, it will be resent until it reaches its destination. This type of delivery is used when transmitting important information such as financial transactions or medical records.
  4. Ordered – This method is used for applications that need all data packets to be received in the same order as they were transmitted. On its own it does not guarantee delivery of every packet, but it does guarantee that whatever arrives is handed to the application in the original order. This type of delivery is used when transmitting data that must be processed sequentially, such as audio and video streams.
  5. Unordered – This method is used for applications that do not care about the order in which data packets are received. It guarantees neither delivery nor ordering; lost packets are simply dropped unless a higher layer requests retransmission. This type of delivery is used when transmitting data whose pieces are independent of one another, such as individual status updates or lookup queries.
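As a small sketch of ordered delivery, here is how a receiver might buffer out-of-order packets and release them to the application in sequence (the function name and packet format are illustrative, not any real protocol’s):

```python
# Minimal sketch of ordered delivery over an unordered channel:
# the receiver buffers out-of-order packets and releases them only
# once every earlier sequence number has arrived.

def ordered_receive(packets):
    """packets: iterable of (seq_no, data) pairs in arrival order."""
    buffer, delivered, expected = {}, [], 0
    for seq, data in packets:
        buffer[seq] = data
        while expected in buffer:      # release any run that is now complete
            delivered.append(buffer.pop(expected))
            expected += 1
    return delivered

# Packets 2 and 1 arrive before 0; the application still sees them in order.
arrivals = [(2, "c"), (0, "a"), (1, "b")]
print(ordered_receive(arrivals))   # ['a', 'b', 'c']
```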

Connection Strategies

Connection-oriented and connectionless communication are two ways of sending data between two parties. In the connection-oriented case, the sender (A) first establishes a connection with the receiver (B), then sends its messages to B, and closes the connection when finished.

Connectionless communication doesn’t require any setup at all: the sender simply transmits data without first checking whether the receiver is ready to accept it. This is why it is faster but less reliable than connection-oriented communication: if an endpoint fails, there is no established channel through which the loss of data can be detected and recovered after a message block has been sent.

Connectionless communication isn’t limited to simple messaging; it underlies many other applications. For example, packets can be sent over the Internet without any prior connection between sender and receiver: your computer can send data to someone else’s computer without asking permission first, or even knowing whether the other machine is available to receive it.

This is, roughly, how the Internet works: your computer sends a packet toward another computer, and each router along the way forwards it one hop closer to its destination. The packet may pass through many machines before reaching it.

The difference between connection-oriented and connectionless communication is like the difference between a phone call and sending an email. When the user makes a phone call, the user’s phone connects to another phone (which may be anywhere in the world) and stays connected until both parties have finished talking. Sending an email, by contrast, requires no such connection: the message is handed off to the mail system, which delivers it without the two parties ever holding an open line.
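The contrast can be seen directly in the Berkeley sockets API, where TCP is connection-oriented and UDP is connectionless. The sketch below runs both over the loopback interface; the helper function names are made up for illustration:

```python
# TCP (connection-oriented) vs. UDP (connectionless) on loopback.
import socket
import threading

def tcp_echo_once():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def serve():
        conn, _ = srv.accept()          # connection is established here
        conn.sendall(conn.recv(1024))   # echo the data back
        conn.close()

    t = threading.Thread(target=serve)
    t.start()
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(("127.0.0.1", port))    # explicit setup: the TCP handshake
    cli.sendall(b"hello")
    reply = cli.recv(1024)
    cli.close(); t.join(); srv.close()
    return reply

def udp_echo_once():
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind(("127.0.0.1", 0))
    port = srv.getsockname()[1]
    cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    cli.sendto(b"hello", ("127.0.0.1", port))   # no setup at all
    data, addr = srv.recvfrom(1024)
    srv.sendto(data, addr)                       # reply to wherever it came from
    reply, _ = cli.recvfrom(1024)
    cli.close(); srv.close()
    return reply

print(tcp_echo_once())   # echoed over an established connection
print(udp_echo_once())   # echoed as independent datagrams
```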


Contention

Contention occurs when two or more processes want to use the same resource at the same time. One way of resolving this is a queueing strategy: access is delayed until the current process has completed its operation and released its lock on the shared resource. The other solution is partitioning: tasks that would otherwise interfere are split into separate subparts that run at different times or on different resources, so they don’t disturb each other during execution. Partitioning can also be used when there are no resources available to share, or when sharing between multiple processes would not make sense (e.g., for system administration purposes).
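A minimal sketch of the queueing strategy, using Python’s `threading.Lock` to delay each process’s access until the current holder releases the shared resource:

```python
# Resolving contention with a lock: each worker must acquire the
# lock before touching the shared counter, so updates are serialized
# instead of interfering with one another.
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:           # wait here until the resource is free
            counter += 1     # critical section: one thread at a time

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 40000 — no updates were lost
```

Without the lock, two threads could read the same value of `counter` and both write back the same incremented result, silently losing an update.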

There are many different types of queues, but they all share some common properties. First, they are used to control access to a limited resource: when the resource is available, the queue allows the next process to use it; when it is not, processes must wait until the current holder releases it. Second, each waiting process is given an identifier called a “ticket”, which records its place in line so that it can wait its turn without having to repeatedly compete with every other process for the resource.

This is particularly useful in a system where multiple processes may need the same resources but cannot know in advance when those resources will be free. Each process takes a ticket when it joins the line; when a resource becomes available, the process holding the lowest outstanding ticket is served next, so access is granted in roughly the order of arrival.
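The ticket mechanism can be sketched as follows; the class and method names are invented for illustration:

```python
# Sketch of a ticket-based queueing strategy: each arrival takes the
# next ticket number, and the resource serves tickets in order.
from collections import deque

class TicketQueue:
    def __init__(self):
        self.next_ticket = 0
        self.waiting = deque()        # tickets in arrival order

    def take_ticket(self):
        """Join the line and receive a place-holding ticket."""
        t = self.next_ticket
        self.next_ticket += 1
        self.waiting.append(t)
        return t

    def serve_next(self):
        """Called when the resource becomes free; returns the next
        ticket to be served, or None if nobody is waiting."""
        return self.waiting.popleft() if self.waiting else None

q = TicketQueue()
a, b, c = q.take_ticket(), q.take_ticket(), q.take_ticket()
print(q.serve_next(), q.serve_next(), q.serve_next())   # 0 1 2 — arrival order
```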

The queue is a very simple data structure that can be implemented in many different ways — for example, as a linked list, an array, or a circular buffer. Each of these has its own advantages and disadvantages, so it’s important to understand how they work before deciding which one is right for your application.

The linked list implementation uses a linked list of ticket numbers to keep track of the order in which processes enter and exit the queue. This allows fast insertion and removal at the front or back of the queue, but it is not well suited to applications where items must be accessed by their position in the list (e.g., removed by index), because a linked list provides no constant-time random access: reaching the i-th element requires walking the list from the head.
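A minimal linked-list queue along these lines (illustrative code, not any particular operating system’s implementation):

```python
# A singly linked list queue: O(1) enqueue at the tail and dequeue
# at the head, but no constant-time access by position.

class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class LinkedQueue:
    def __init__(self):
        self.head = self.tail = None

    def enqueue(self, value):
        node = Node(value)
        if self.tail is None:         # queue was empty
            self.head = self.tail = node
        else:
            self.tail.next = node
            self.tail = node

    def dequeue(self):
        if self.head is None:
            raise IndexError("dequeue from empty queue")
        value = self.head.value
        self.head = self.head.next
        if self.head is None:         # queue became empty
            self.tail = None
        return value

q = LinkedQueue()
for ticket in (7, 8, 9):
    q.enqueue(ticket)
print(q.dequeue(), q.dequeue(), q.dequeue())   # 7 8 9 — FIFO order
```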

The communication structure of an operating system shapes how it operates: it determines how the operating system handles communication between processes.

For example, if a process wants to send data to another process, that data must pass through some kind of channel before it reaches its destination. For this channel to work effectively, both sides need access permissions and information about each other; otherwise, they won’t be able to send messages correctly, or even receive them at all.

This is one of the reasons an operating system defines a communication structure. The structure defines how processes can send data to each other, whether or not they belong to the same process group. It also determines how much access each process has to other processes in the system, which helps prevent unauthorized access between programs.


Conclusion

We have seen that the communication structure of operating systems can shape how they operate. By looking closely at how a system uses these structures, we can understand its behavior and make changes to improve performance or adapt to new technologies.
