inviso_lfm

An Inviso Off-Line Logfile Merger

Implements an off-line logfile merger, merging binary trace-log files from several nodes together in chronological order. The logfile merger can also do pid-to-alias translations.

The logfile merger is supposed to be called from the Erlang shell or from a higher-layer trace tool. For it to work, all logfiles and trace information files (containing the pid-alias associations) must be located in a file system accessible from this node and organized according to the API description.

The logfile merger starts a process, the output process, which in turn starts one reader process for every node it shall merge logfiles from. Note that the reason for having one process per node is not remote communication (the logfile merger is an off-line utility); the reader processes are used to sort the logfile entries in chronological order.

The logfile merger can be customized both in the implementation of the reader processes and in the output that the output process generates for every logfile entry.

Functions


merge(Files, OutFile) -> {ok, Count} | {error, Reason}

merge(Files, WorkHFun, InitHandlerData) -> {ok, Count} | {error, Reason}

merge(Files, BeginHFun, WorkHFun, EndHFun, InitHandlerData) -> {ok, Count} | {error, Reason}

  • Files = [FileDescription]
  •  FileDescription = FileSet | {reader,RMod,RFunc,FileSet}
  •   FileSet = {Node,LogFiles} | {Node,[LogFiles]}
  •    Node = atom()
  •    LogFiles = [{trace_log,[FileName]}] | [{trace_log,[FileName]},{ti_log,TiFileSpec}]
  •     TiFileSpec = [string()] - a list containing exactly one string.
  •     FileName = string()
  •   RMod = RFunc = atom()
  • OutFile = string()
  • BeginHFun = fun(InitHandlerData) -> {ok, NewHandlerData} | {error, Reason}
  • WorkHFun = fun(Node, LogEntry, PidMappings, HandlerData) -> {ok, NewHandlerData}
  •  LogEntry = tuple()
  •  PidMappings = term()
  • EndHFun = fun(HandlerData) -> ok | {error, Reason}
  • Count = int()
  • Reason = term()

Merges the logfiles in Files together into one file in chronological order. The logfile merger consists of an output process and one or several reader processes.

If successful, returns {ok, Count} where Count is the total number of log entries processed.
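
As a usage sketch (the node names and file names below are hypothetical), a plain trace-log from one node and a wrapset plus ti-file from another node can be merged into one text file with merge/2:

  Files = [{node1@host, [{trace_log, ["node1.log"]}]},
           {node2@host, [{trace_log, ["node2_0.log", "node2_1.log"]},
                         {ti_log, ["node2.ti"]}]}],
  {ok, Count} = inviso_lfm:merge(Files, "merged.txt")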

When specifying LogFiles, the standard reader process currently only supports:

  • one single file
  • a list of wraplog files, following the naming convention <Prefix><Nr><Suffix>.

Note that (when using the standard reader process) it is possible to give a list of LogFiles. The list must be sorted starting with the oldest. This will cause several trace-logs (from the same node) to be merged together in the same OutFile. The reader process will simply start reading the next file (or wrapset) when the previous is done.
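A sketch of such a FileSet (file names hypothetical), where an older wrapset and a more recent single log from the same node are merged in order, oldest first:

  FileSet = {node1@host,
             [[{trace_log, ["old_0.log", "old_1.log"]}],   % oldest wrapset first
              [{trace_log, ["recent.log"]}]]},             % most recent log last
  {ok, Count} = inviso_lfm:merge([FileSet], "node1_all.txt")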

FileDescription == {reader,RMod,RFunc,FileSet} indicates that spawn(RMod, RFunc, [OutputPid,LogFiles]) shall create a reader process.
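For example, with a hypothetical reader module my_reader exporting an init/2 function, a file description could look like:

  {reader, my_reader, init, {node1@host, [{trace_log, ["node1.log"]}]}}

causing the output process to call spawn(my_reader, init, [OutputPid, LogFiles]), where LogFiles is taken from the FileSet above.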

The output process is customized with BeginHFun, WorkHFun and EndHFun. If merge/2 is used, a default output process configuration is applied, basically creating a text file and writing the output line by line. BeginHFun is called once before requesting log entries from the reader processes. WorkHFun is called for every log entry (trace message) LogEntry; here the log entry typically gets written to the output. PidMappings contains the translations produced by the reader process. EndHFun is called when all reader processes have terminated.
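
A minimal sketch of a merge/5 call that writes one line of text per log entry, assuming Files is bound as in the earlier example; the file name and output format are examples only:

  BeginHFun = fun(FileName) ->
                  file:open(FileName, [write])   % {ok,IoDevice} serves as {ok,NewHandlerData}
              end,
  WorkHFun  = fun(Node, LogEntry, PidMappings, FD) ->
                  io:format(FD, "~w ~w ~p~n", [Node, PidMappings, LogEntry]),
                  {ok, FD}
              end,
  EndHFun   = fun(FD) -> file:close(FD) end,     % returns ok
  {ok, Count} =
      inviso_lfm:merge(Files, BeginHFun, WorkHFun, EndHFun, "merged.txt"),

Here "merged.txt" is passed as InitHandlerData and becomes the argument to BeginHFun; the io-device returned by file:open is then threaded through every WorkHFun call as the handler data.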

Currently the standard reader can only handle one ti-file (per LogFiles). Furthermore, the current inviso meta tracer is not capable of wrapping ti-files. (This is also because a wrapped ti-log would most likely be worthless, since alias associations made at the beginning are erased but still used in the trace-log.)

The standard reader process is implemented in the module inviso_lfm_tpreader (trace port reader). It understands trace-logs generated by the Erlang linked-in trace-port driver and trace information files generated by inviso_rt_meta.

Writing Your Own Reader Process

Writing a reader process is not that difficult. It must be started with the pid of the output process and the LogFiles specification as arguments (see the spawn call above), deliver log entries together with their pid-alias mappings to the output process on request, and terminate when it has no more log entries to deliver.

The reader process must of course understand the format of a logfile written by the runtime component.
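
The sketch below outlines the structure of such a reader module. The module name and the helper functions are hypothetical, and the message formats between the output process and the reader (get_next_entry / next_entry) are assumptions modelled on the standard reader; consult the inviso_lfm_tpreader source for the exact protocol before relying on them.

  -module(my_reader).                 % hypothetical module name
  -export([init/2]).

  %% Spawned by the output process as spawn(my_reader, init, [OutputPid, LogFiles]).
  init(OutputPid, LogFiles) ->
      State = open_logfiles(LogFiles),               % hypothetical helper
      loop(OutputPid, State).

  loop(OutputPid, State) ->
      receive
          {get_next_entry, OutputPid} ->             % assumed request format
              case read_next_entry(State) of         % hypothetical helper
                  {ok, PidMappings, NowTS, Entry, NewState} ->
                      OutputPid ! {next_entry, self(), PidMappings, NowTS, Entry},
                      loop(OutputPid, NewState);
                  eof ->                             % no more entries: terminate normally
                      ok
              end
      end.

  %% Replace these stubs with code that opens the logfiles, builds the
  %% pid-alias mappings and decodes the logfile format written by the
  %% runtime component.
  open_logfiles(_LogFiles) ->
      undefined.

  read_next_entry(_State) ->
      eof.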