The general way to use libpipeline involves constructing a pipeline structure and adding one or more pipecmd structures to it. A pipecmd represents a subprocess (or "command"), while a pipeline represents a sequence of subprocesses each of whose outputs is connected to the next one's input, as in the example ls | grep pattern | less. The calling program may adjust certain properties of each command independently, such as its environment and nice(3) priority, as well as properties of the entire pipeline such as its input and output and the way signals are handled while executing it. The calling program may then start the pipeline, read output from it, wait for it to complete, and gather its exit status.
Strings passed as const char * function arguments will be copied by the library.
Construct a new command representing execution of a program called name.
Convenience constructors wrapping pipecmd_new and pipecmd_arg. Construct a new command representing execution of a program called name with arguments. Terminate the argument list with NULL.
Split argstr on whitespace to construct a command and arguments, honouring shell-style single-quoting, double-quoting, and backslashes, but not other shell evilness like wildcards, semicolons, or backquotes. This is included only to support situations where command arguments are encoded into configuration files and the like. While it is safer than system(3), it still involves significant string parsing which is inherently riskier than avoiding it altogether. Please try to avoid using it in new code.
The data argument is passed as the function's only argument, and will be freed before returning using free_func (if non-NULL).
pipecmd_* functions that deal with arguments cannot be used with the command returned by this function.
Return a new command that just passes data from its input to its output.
Return a duplicate of a command.
Add an argument to a command.
Convenience function to add an argument with printf substitutions.
Convenience functions wrapping pipecmd_arg to add multiple arguments at once. Terminate the argument list with NULL.
Split argstr on whitespace to add a list of arguments, honouring shell-style single-quoting, double-quoting, and backslashes, but not other shell evilness like wildcards, semicolons, or backquotes. This is included only to support situations where command arguments are encoded into configuration files and the like. While it is safer than system(3), it still involves significant string parsing which is inherently riskier than avoiding it altogether. Please try to avoid using it in new code.
Return the number of arguments to this command. Note that this includes the command name as the first argument, so the command `echo foo bar' is counted as having three arguments.
Set the nice(3) value for this command. Defaults to 0. Errors while attempting to set the nice value are ignored, aside from emitting a debug message.
If discard_err is non-zero, redirect this command's standard error to /dev/null. Otherwise, and by default, pass it through. Discarding standard error is usually a bad idea.
Unset environment variable name while running this command.
Clear the environment while running this command. (Note that environment operations work in sequence; pipecmd_clearenv followed by pipecmd_setenv causes the command to have just a single environment variable set.)
Add a command to a sequence created using pipecmd_new_sequence.
Dump a string representation of a command to stream.
Return a string representation of a command. The caller should free the result.
Execute a single command, replacing the current process. Never returns, instead exiting non-zero on failure.
Destroy a command. Safely does nothing if cmd is NULL.
Construct a new pipeline.
Convenience constructors wrapping pipeline_new and pipeline_command. Construct a new pipeline consisting of the given list of commands. Terminate the command list with NULL.
Joins two pipelines, neither of which may have been started. Discards want_out, want_outfile, and outfd from p1, and want_in, want_infile, and infd from p2.
Connect the input of one or more sink pipelines to the output of a source pipeline. The source pipeline may be started, but in that case pipeline_want_out must have been called with a negative fd; otherwise, calls pipeline_want_out (source, -1). In any event, calls pipeline_want_in (sink, -1) on all sinks, none of which are allowed to be started. Terminate the list of sinks with NULL.
This is an application-level connection; data may be intercepted between the pipelines by the program before calling pipeline_pump, which sets data flowing from the source to the sinks. It is primarily useful when more than one sink pipeline is involved, in which case the pipelines cannot simply be concatenated into one.
The result is similar to tee(1), except that output can be sent to more than two places and can easily be sent to multiple processes.
Add a command to a pipeline.
Construct a new command from a shell-quoted string and add it to a pipeline in one go. See the comment against pipecmd_new_argstr above if you are tempted to use this function.
Convenience functions wrapping pipeline_command to add multiple commands at once. Terminate the command list with NULL.
Set file descriptors to use as the input and output of the whole pipeline. If non-negative, fd is used directly as a file descriptor. If negative, pipeline_start will create pipes and store the input writing half and the output reading half in the pipeline's infd or outfd field as appropriate. The default is to leave input and output as stdin and stdout unless pipeline_want_infile or pipeline_want_outfile respectively has been called.
Calling these functions supersedes any previous call to pipeline_want_infile or pipeline_want_outfile respectively.
Set file names to open and use as the input and output of the whole pipeline. This may be more convenient than supplying file descriptors, and guarantees that the files are opened with the same privileges under which the pipeline is run.
Calling these functions (even with NULL, which returns to the default of leaving input and output as stdin and stdout) supersedes any previous call to pipeline_want_in or pipeline_want_out respectively.
If ignore_signals is non-zero, ignore SIGINT and SIGQUIT in the calling process while the pipeline is running, like system(3). Otherwise, and by default, leave their dispositions unchanged.
Return the number of commands in this pipeline.
Return command number n from this pipeline, counting from zero, or NULL if n is out of range.
Return the process ID of command number n from this pipeline, counting from zero. The pipeline must be started. Return -1 if n is out of range or if the command has already exited and been reaped.
Get streams corresponding to infd and outfd respectively. The pipeline must be started.
Dump a string representation of p to stream.
Return a string representation of p. The caller should free the result.
Destroy a pipeline and all its commands. Safely does nothing if p is NULL. May wait for the pipeline to complete if it has not already done so.
Install a post-fork handler. This will be run in any child process immediately after it is forked. For instance, this may be used for cleaning up application-specific signal handlers. Pass NULL to clear any existing post-fork handler.
Start the processes in a pipeline. Installs this library's SIGCHLD handler if not already installed. Calls error (FATAL) on error.
Wait for a pipeline to complete and return the exit status.
Start a pipeline, wait for it to complete, and free it, all in one go.
Pump data among one or more pipelines connected using pipeline_connect until all source pipelines have reached end-of-file and all data has been written to all sinks (or failed). All relevant pipelines must be supplied: that is, no pipeline that has been connected to a source pipeline may be supplied unless that source pipeline is also supplied. Automatically starts all pipelines if they are not already started, but does not wait for them. Terminate the list of pipelines with NULL.
Read len bytes of data from the pipeline, returning the data block. len is updated with the number of bytes read.
Look ahead in the pipeline's output for len bytes of data, returning the data block. len is updated with the number of bytes read. The starting position of the next read or peek is not affected by this call.
Return the number of bytes of data that can be read using pipeline_read or pipeline_peek solely from the peek cache, without having to read from the pipeline itself (and thus potentially block).
Skip over and discard len bytes of data from the peek cache. Asserts that enough data is available to skip, so you may want to check using pipeline_peek_size first.
Read a line of data from the pipeline, returning it.
Look ahead in the pipeline's output for a line of data, returning it. The starting position of the next read or peek is not affected by this call.
If the ignore_signals flag is set in a pipeline (which is not the default), then the SIGINT and SIGQUIT signals will be ignored in the parent process while child processes are running. This mirrors the behaviour of system(3).
libpipeline leaves child processes with the default disposition of SIGPIPE, namely to terminate the process. It ignores SIGPIPE in the parent process while running pipeline_pump.
You should not rely on this behaviour, and in future it may be modified either to reap only child processes created by this library or to provide a way to return foreign statuses to the application. Please contact the author if you have an example application and would like to help design such an interface.
The simplest case is simple. To run a single command, such as mv source dest:
	pipeline *p = pipeline_new_command_args ("mv", source, dest, NULL);
	int status = pipeline_run (p);
libpipeline is often used to mimic shell pipelines, such as the following example:
	zsoelim < input-file | tbl | nroff -mandoc -Tutf8
The code to construct this would be:
	pipeline *p;
	int status;

	p = pipeline_new ();
	pipeline_want_infile (p, "input-file");
	pipeline_command_args (p, "zsoelim", NULL);
	pipeline_command_args (p, "tbl", NULL);
	pipeline_command_args (p, "nroff", "-mandoc", "-Tutf8", NULL);
	status = pipeline_run (p);
You might want to construct a command more dynamically:
	pipecmd *manconv = pipecmd_new_args ("manconv", "-f", from_code,
	                                     "-t", "UTF-8", NULL);
	if (quiet)
		pipecmd_arg (manconv, "-q");
	pipeline_command (p, manconv);
Perhaps you want an environment variable set only while running a certain command:
	pipecmd *less = pipecmd_new ("less");
	pipecmd_setenv (less, "LESSCHARSET", lesscharset);
You might find yourself needing to pass the output of one pipeline to several other pipelines, in a ``tee'' arrangement:
	pipeline *source, *sink1, *sink2;

	source = make_source ();
	sink1 = make_sink1 ();
	sink2 = make_sink2 ();
	pipeline_connect (source, sink1, sink2, NULL);
	/* Pump data among these pipelines until there's nothing left. */
	pipeline_pump (source, sink1, sink2, NULL);
	pipeline_free (sink2);
	pipeline_free (sink1);
	pipeline_free (source);
Maybe one of your commands is actually an in-process function, rather than an external program:
	pipecmd *inproc = pipecmd_new_function ("in-process", &func, NULL, NULL);
	pipeline_command (p, inproc);
Sometimes your program needs to consume the output of a pipeline, rather than sending it all to some other subprocess:
	pipeline *p = make_pipeline ();
	const char *line;

	pipeline_want_out (p, -1);
	pipeline_start (p);
	line = pipeline_peekline (p);
	if (!strstr (line, "coding: UTF-8"))
		printf ("Unicode text follows:\n");
	while ((line = pipeline_readline (p)) != NULL)
		printf (" %s", line);
	pipeline_free (p);
libpipeline is licensed under the GNU General Public License, version 3 or later. See the README file for full details.