Mohit Sharma
Dec 27, 2023
Interprocess Communication

There are several methods for communication within a single machine. These methods are known as Interprocess Communication (IPC) and allow different processes to communicate with each other. Some common methods of IPC include Pipes, Named Pipes, Message Queues, Shared Memory, Remote Procedure Calls (RPC), Semaphores, and Sockets.
Pipes provide a one-way communication channel between related processes (typically a parent and its child) and are created with the pipe() system call in Unix-like operating systems. This creates a pair of file descriptors: one for the read end (receiving) of the pipe and one for the write end (sending) of the pipe. These file descriptors allow processes to interact with the pipe, but they do not directly represent the buffer managed by the kernel.
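As a quick illustration, here is a minimal sketch (not taken from the original article) in which a child writes a message into a pipe and the parent reads it; the buffer size and message text are arbitrary.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void) {
    int fds[2];                      /* fds[0] = read end, fds[1] = write end */
    if (pipe(fds) == -1) {
        perror("pipe");
        return 1;
    }

    pid_t pid = fork();
    if (pid == 0) {                  /* child: writes into the pipe */
        close(fds[0]);               /* close the unused read end */
        const char *msg = "hello from the child";
        write(fds[1], msg, strlen(msg) + 1);
        close(fds[1]);
        return 0;
    }

    /* parent: reads from the pipe */
    close(fds[1]);                   /* close the unused write end */
    char buf[64];
    if (read(fds[0], buf, sizeof(buf)) > 0)
        printf("parent received: %s\n", buf);
    close(fds[0]);
    wait(NULL);                      /* reap the child */
    return 0;
}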
Named Pipes (FIFOs) extend this idea to unrelated processes. A named pipe is created with the mkfifo() system call provided by the operating system. Once created, the named pipe file appears in the file system, and processes can interact with it like they would with any other file.
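For instance, a minimal writer-side sketch (the path /tmp/demo_fifo is an arbitrary name chosen for this illustration; a separate reader process would open the same path for reading):

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

int main(void) {
    const char *path = "/tmp/demo_fifo";

    /* Create the named pipe; ignore the error if it already exists. */
    if (mkfifo(path, 0666) == -1 && errno != EEXIST) {
        perror("mkfifo");
        return 1;
    }

    /* open() on a FIFO blocks until some process opens it for reading. */
    int fd = open(path, O_WRONLY);
    const char *msg = "hello over the FIFO";
    write(fd, msg, strlen(msg) + 1);
    close(fd);
    return 0;
}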
Message Queues allow processes to exchange data in the form of messages. They let processes communicate asynchronously: messages are stored in a queue, waiting to be processed, and are deleted after being processed.

A message queue is created or opened with msgget(), which takes a key (message queue identifier) and creates or opens a message queue associated with that key. If the message queue doesn't already exist, it's created; if it does exist, the process gains access to it.

A message is sent with the msgsnd() system call. It specifies the message queue's identifier, the message type, the data to be sent, and some control flags. The message is then added to the message queue, waiting for the recipient process to retrieve it.

A message is received with the msgrcv() system call. It specifies the message queue's identifier, the message type it wants to receive, a buffer to hold the message data, and other control flags. The kernel retrieves the message from the message queue that matches the specified type and copies it into the buffer.

A message queue is inspected and controlled with the msgctl() system call. If no processes are using the message queue, it can be removed from the system using the same system call.
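Putting these calls together, here is a minimal sketch of a single process sending a message and reading it back (normally the receiver would be a different process); the key value 1234 and the message text are arbitrary choices for illustration.

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>

struct msgbuf {
    long mtype;        /* message type, must be > 0 */
    char mtext[64];    /* message payload */
};

int main(void) {
    /* Create (or open) a queue identified by an arbitrary key. */
    int qid = msgget(1234, IPC_CREAT | 0666);
    if (qid == -1) { perror("msgget"); return 1; }

    /* Send a message of type 1. */
    struct msgbuf out = { .mtype = 1 };
    strcpy(out.mtext, "hello via message queue");
    msgsnd(qid, &out, sizeof(out.mtext), 0);

    /* Receive the first message of type 1. */
    struct msgbuf in;
    msgrcv(qid, &in, sizeof(in.mtext), 1, 0);
    printf("received: %s\n", in.mtext);

    /* Remove the queue once it is no longer needed. */
    msgctl(qid, IPC_RMID, NULL);
    return 0;
}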
Message queues and pipes are both mechanisms for inter-process communication (IPC), but they differ in several important ways:
1. Communication Mechanism: a pipe carries an unstructured stream of bytes, whereas a message queue carries discrete messages.
2. Data Format: data in a pipe has no inherent boundaries or types; each message in a queue has a type field and a defined length.
3. Synchronization: a pipe requires the reader and writer to coordinate directly (writes block when the pipe is full, reads block when it is empty), whereas a message queue lets processes send and receive asynchronously because the kernel stores messages until they are retrieved.
4. Directionality: a pipe is unidirectional, while any process with access to a message queue can both send and receive.
5. Persistence: data in a pipe disappears once both ends are closed, while a message queue persists in the kernel until it is explicitly removed.
6. Process Relationships: anonymous pipes only work between related processes (for example, a parent and its child), whereas message queues can be used by any processes that know the queue's key.
In essence, message queues are more versatile and suitable for structured communication between processes, whereas pipes are simpler and are often used for direct data streaming between related processes. The choice between message queues and pipes depends on the complexity of the data being exchanged, the synchronization requirements, and the relationship between the communicating processes.
With Shared Memory, a region of memory is shared between different processes: one process writes to this memory and another process can read from it. This allows for fast communication between processes, as data doesn't have to be copied around. However, due to the potential for synchronization issues, careful management of synchronization mechanisms is crucial to maintaining data integrity and preventing conflicts.
Let's dive into the practical aspects of how shared memory works in Unix-like systems (a complete example combining these steps follows the list):
1. Creating Shared Memory:
To use shared memory, you first need to create it. This is typically done using the shmget() system call, which allocates a shared memory segment. The call takes parameters such as a key (an identifier for the shared memory) and the size of the memory segment you want to create.
Example: int shmid = shmget(key, size, IPC_CREAT | 0666);
2. Attaching to Shared Memory:
Once the shared memory is created, processes that want to use it need to attach to it. This is done using the shmat() system call. It returns a pointer to the shared memory segment, allowing the process to access it.
Example: void *shared_memory = shmat(shmid, NULL, 0);
3. Sharing Data:
With the shared memory attached, processes can now read from and write to the shared memory region just like any other memory. Data written by one process can be immediately accessed by another process attached to the same segment.
Example: strcpy(shared_memory, "Hello from Process 1!");
4. Synchronization:
Shared memory can be accessed by multiple processes simultaneously, which can lead to race conditions. Synchronization mechanisms like semaphores or mutexes are used to ensure data consistency.
Example: Using a semaphore to control access to the shared memory.
5. Detaching from Shared Memory:
When a process is done using the shared memory, it should detach from it using the shmdt() system call.
Example: shmdt(shared_memory);
6. De-allocating Shared Memory:
When shared memory is no longer needed by any process, it should be de-allocated using the shmctl() system call with the IPC_RMID command.
Example: shmctl(shmid, IPC_RMID, NULL);
7. Permissions and Ownership:
Shared memory segments, like other IPC resources, have ownership and permissions associated with them. Proper permissions ensure that only authorized processes can access the shared memory.
8. Error Handling:
System calls related to shared memory return error codes that should be checked to handle various scenarios, such as when creating or attaching to shared memory fails.
9. Process Independence:
Shared memory is not limited by process hierarchy. Different processes can access the shared memory as long as they have the required permissions.
10. Cleaning Up:
It’s important to properly clean up shared memory resources when they are no longer needed. Detaching and de-allocating shared memory segments prevents resource leaks.
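Combining steps 1–6, here is a minimal sketch in which a parent writes to a shared segment and a child reads it; the key, segment size, and message are arbitrary, and the data is written before fork() so that no extra synchronization is needed in this toy example.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>

int main(void) {
    /* 1. Create a 4 KiB shared memory segment (the key 5678 is arbitrary). */
    int shmid = shmget(5678, 4096, IPC_CREAT | 0666);
    if (shmid == -1) { perror("shmget"); return 1; }

    /* 2. Attach the segment and 3. write some data before forking. */
    char *mem = shmat(shmid, NULL, 0);
    strcpy(mem, "Hello from Process 1!");

    if (fork() == 0) {
        /* Child: attach again to illustrate shmat() and read the data
           (an existing attachment is also inherited across fork()). */
        char *child_mem = shmat(shmid, NULL, 0);
        printf("child read: %s\n", child_mem);
        shmdt(child_mem);            /* 5. detach */
        return 0;
    }

    wait(NULL);                      /* wait for the child to finish */
    shmdt(mem);                      /* 5. detach */
    shmctl(shmid, IPC_RMID, NULL);   /* 6. de-allocate the segment */
    return 0;
}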
Semaphores are a synchronization mechanism used to coordinate the activities of multiple processes in a computer system. They are used to enforce mutual exclusion, avoid race conditions and implement synchronization between processes.
Semaphores provide two operations: wait (P) and signal (V). The wait operation decrements the value of the semaphore, and the signal operation increments the value of the semaphore. When the value of the semaphore is zero, any process that performs a wait operation will be blocked until another process performs a signal operation.
Semaphores are used to implement critical sections, which are regions of code that must be executed by only one process at a time. By using semaphores, processes can coordinate access to shared resources, such as shared memory or I/O devices.
Semaphores differ from other IPC methods such as Pipes, Message Queues and Shared Memory in that they are not used for direct communication between processes. Instead, they are used to coordinate access to shared resources and ensure that only one process can access a shared resource at a time.
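As a sketch, here is how two processes could use a POSIX named semaphore to protect a critical section: sem_wait() plays the role of the wait (P) operation and sem_post() the signal (V) operation, and the name /demo_sem is an arbitrary choice for illustration.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <semaphore.h>
#include <sys/wait.h>

int main(void) {
    /* Create (or open) a named semaphore with an initial value of 1, i.e. a mutex. */
    sem_t *sem = sem_open("/demo_sem", O_CREAT, 0666, 1);
    if (sem == SEM_FAILED) { perror("sem_open"); return 1; }

    if (fork() == 0) {
        sem_wait(sem);                           /* P: enter the critical section */
        printf("child inside the critical section\n");
        sleep(1);                                /* pretend to use a shared resource */
        sem_post(sem);                           /* V: leave the critical section */
        return 0;
    }

    sem_wait(sem);                               /* P: blocks if the child holds the semaphore */
    printf("parent inside the critical section\n");
    sem_post(sem);                               /* V */

    wait(NULL);
    sem_close(sem);
    sem_unlink("/demo_sem");                     /* remove the semaphore name */
    return 0;
}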
Sockets are an inter-process communication (IPC) mechanism that allows two or more processes to communicate with each other by creating a bidirectional channel between them. A socket is one endpoint of a two-way communication link between two programs running on the network. The socket mechanism provides a means of IPC by establishing named contact points between which the communication takes place.
POSIX sockets are a type of socket available in the POSIX API. There are two types of POSIX sockets: IPC sockets (aka Unix domain sockets) and network sockets. IPC sockets enable channel-based communication for processes on the same physical device (host), whereas network sockets enable this kind of IPC for processes that can run on different hosts, thereby bringing networking into play.
POSIX sockets differ from other IPC methods such as Pipes, Message Queues and Shared Memory in that they can be used for both local and network communication.
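For illustration, here is a minimal sketch of an IPC (Unix domain) socket in which the parent acts as the server and the child as the client; the path /tmp/demo.sock is an arbitrary contact point, and a network socket would use AF_INET with an IP address and port instead of a file-system path.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <sys/wait.h>

int main(void) {
    const char *path = "/tmp/demo.sock";         /* the named contact point */
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    /* Server side: create the socket, bind it to the path, and listen. */
    int srv = socket(AF_UNIX, SOCK_STREAM, 0);
    unlink(path);                                /* remove a stale socket file, if any */
    bind(srv, (struct sockaddr *)&addr, sizeof(addr));
    listen(srv, 1);

    if (fork() == 0) {
        /* Client side: connect to the same path and send a message. */
        int cli = socket(AF_UNIX, SOCK_STREAM, 0);
        connect(cli, (struct sockaddr *)&addr, sizeof(addr));
        const char *msg = "hello over a Unix domain socket";
        write(cli, msg, strlen(msg) + 1);
        close(cli);
        return 0;
    }

    int conn = accept(srv, NULL, NULL);          /* accept the child's connection */
    char buf[64];
    if (read(conn, buf, sizeof(buf)) > 0)
        printf("server received: %s\n", buf);

    close(conn);
    close(srv);
    wait(NULL);
    unlink(path);                                /* clean up the socket file */
    return 0;
}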
Software that provides services to applications beyond those generally available from the operating system.
Middleware is software that lies between an operating system and the applications running on it. It enables communication and data management for distributed applications. Some common examples of middleware include database middleware, application server middleware, message-oriented middleware, web middleware, and transaction-processing monitors.
Middleware typically provides messaging services so that different applications can communicate using messaging frameworks and formats such as the Simple Object Access Protocol (SOAP), web services, Representational State Transfer (REST), and JavaScript Object Notation (JSON). While all middleware performs communication functions, the type a company chooses to use depends on the service being provided and the type of information that needs to be communicated.
Google Protocol Buffers (protobuf) is a language-neutral, platform-neutral extensible mechanism for serializing structured data. It was developed by Google for internal use and provided a code generator for multiple languages under an open-source license. The design goals for Protocol Buffers emphasized simplicity and performance. In particular, it was designed to be smaller and faster than XML.
Hope you enjoyed reading. I’m always open to suggestions and new ideas. Please write to me :)
1. Shared Memory Model:
In this IPC model, a shared memory region is established and used by the processes for data communication. This memory region is present in the address space of the process that creates the shared memory segment. Processes that want to communicate with this process must attach this memory segment to their address space.
2. Message Passing Model:
In this model, the processes communicate with each other by exchanging messages. For this purpose, a communication link must exist between the processes, and it must support at least two operations: send(message) and receive(message). The size of messages may be fixed or variable.
Difference between the Shared Memory Model and the Message Passing Model in IPC:
1. In the shared memory model, a shared memory region is used for communication; in the message passing model, a message passing facility is used.
2. Shared memory is used between processes on a single machine (uniprocessor or multiprocessor), since the communicating processes must share a common address space; message passing is typically used in a distributed environment where the communicating processes reside on remote machines connected through a network.
3. With shared memory, the code for reading and writing the data must be written explicitly by the application programmer; with message passing, no such code is required because the facility provides mechanisms for communication and for synchronizing the actions of the communicating processes.
4. Shared memory gives the maximum speed of computation, since communication happens through memory and system calls are needed only to establish the shared memory segment; message passing is more time-consuming, since it is implemented through kernel intervention (system calls).
5. With shared memory, processes must ensure that they are not writing to the same location simultaneously; message passing is useful for sharing small amounts of data, since such conflicts need not be resolved.
6. Shared memory is the faster communication strategy; message passing is relatively slower.
7. Shared memory involves no kernel intervention once the segment is set up; message passing involves kernel intervention for every message.
8. Shared memory can be used to exchange larger amounts of data; message passing is suited to exchanging small amounts of data.