That's probably how pty device files get created, but you don't want to do that whenever you want a pty. Any given machine usually has a complement of pty device files already created.
Pseudo-TTYs are fairly OS-specific, and you don't mention which OS you want to do this on. For modern Linux, I'd take a look at openpty(3). You can find working example code in the OpenSSH source, sshpty.c; you'll probably have to find the code that calls pty_allocate() to fully understand it.
You're confusing two things here.
A socket is a file descriptor - a handle - given to a program so that it can use a network connection in almost the same way it uses files. The socket API is protocol-independent; sockets can be created for IPv4 connections or IPv6 ones, but (given kernel support) also for things like DECnet, AppleTalk, or raw Ethernet.
Since the socket API is fairly easy to use, but talking to a process on the same machine through an actual network protocol is rather inefficient, at some point the UNIX domain socket was created to allow use of the socket API without that inefficiency. It also adds some extra features; e.g., it is possible to pass file descriptors to another process over a UNIX domain socket.
When one uses UNIX domain sockets, both processes still hold a socket, one for each side of the connection. The use of the socket is no different from, say, IPv4 sockets, apart from the initial connection setup.
One thing the socket API cannot do without is an address: to connect a socket, you have to give it an address to talk to, and this is no different for the UNIX domain socket. Since it's UNIX, where everything is a file anyway, it was decided to make these addresses look like filenames. And since we're already doing that, it makes sense to make these addresses appear in the file system, since that makes them easy to spot.
Unfortunately, the name given to these things in the file system was also 'UNIX domain socket' (or at least, that's what people started calling them). They're not the actual sockets in the sense of the socket API, however; they couldn't be, since those are just a number. As such, their counterpart in an IPv4 socket is not that number, but instead the IP address and port number of the peer you're talking to.
Finally, I'll add that since the socket API doesn't deal with files directly, these filesystem representations aren't strictly necessary. Indeed, Linux has a concept of 'anonymous UNIX domain sockets', which are just that: UNIX domain sockets without any link in the filesystem...
Best Answer
This depends a lot on the communication mechanism.
At the most transparent end of the spectrum, processes can communicate using internet sockets (i.e. IP). Then wireshark or tcpdump can show all traffic by pointing it at the loopback interface.
At an intermediate level, traffic on pipes and unix sockets can be observed with truss/strace/trace/..., the Swiss army chainsaw of system tracing. This can slow down the processes significantly, however, so it may not be suitable for profiling.

At the most opaque end of the spectrum, there's shared memory. The basic operating principle of shared memory is that accesses are completely transparent in each involved process; system calls are only needed to set up the shared memory regions. Tracing these memory accesses from the outside would be hard, especially if you need the observation not to perturb the timing. You can try tools like the Linux trace toolkit (requires a kernel patch) and see if you can extract useful information; it's the kind of area where I'd expect Solaris to have a better tool (but I have no knowledge of it).
If you have the source, your best option may well be to add tracing statements to key library functions. This may be achievable with LD_PRELOAD tricks even if you don't have the (whole) source, as long as you have enough understanding of the control flow of the part of the program that accesses the shared memory.
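The LD_PRELOAD trick amounts to a tiny shared library that defines the function you want to observe and forwards to the real one via dlsym(RTLD_NEXT, ...). A sketch wrapping mmap() (chosen as a stand-in for whichever call the target program uses to reach the shared memory):

```c
// trace.c - log every mmap() call made by a target program, then forward
// to the real implementation.
// Build:  gcc -shared -fPIC -o trace.so trace.c -ldl
// Run:    LD_PRELOAD=./trace.so ./target-program
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>
#include <sys/mman.h>

void *mmap(void *addr, size_t len, int prot, int flags, int fd, off_t off) {
    // Look up the next (real) mmap in the symbol search order, once.
    static void *(*real_mmap)(void *, size_t, int, int, int, off_t);
    if (!real_mmap)
        real_mmap = dlsym(RTLD_NEXT, "mmap");

    void *p = real_mmap(addr, len, prot, flags, fd, off);
    fprintf(stderr, "mmap(len=%zu, fd=%d) = %p\n", len, fd, p);
    return p;
}
```

The same pattern works for any exported library function the program calls through the PLT; the usual caveats apply (it won't see direct syscalls, statically linked code, or libc-internal calls that bypass the dynamic symbol table).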