# Logging

TStorage uses a custom logging format. To parse log files there is a special tool, logparser, which decodes logs from the binary format into a human-readable one.

> **NOTE** Parsing logs is computationally expensive; use it carefully in a production environment.

## Environment preparation

To use logparser and the other tools that decompress logs from TStorage, you need:

* Python3 >= 3.8
* the tools repositories on your local machine, in /home/tstorage/src/ (you may try to use these tools from other directories, but that requires significant customization of the sys.path.append calls)
* (optional) the other TStorage repositories (libtstorage, DTimestamp, USContainer, USConnector), if you want to recreate the logs dictionary

If you don't already have one ready, you must prepare a decompressing dictionary for the Logger. To do this, run:

/home/tstorage/src/tools/tstools/logparser/createlogdict.py

with all other repositories checked out to their latest versions (to see where they should be located, look at the map in createlogdict.py:57 - createLogDict).

After logdict creation completes successfully, the file logdict.txt should exist at /home/tstorage/logdict.txt. To change this path, see /home/tstorage/src/tools/tstools/logparser/logparserconfig.py.

### Example of logdict.txt content:

    0:Recs is full
    1:->PrincipalDynamic
    2:data - killing %ld
    3:Trying to read non-existing param - may cause undefined behaviour.
    4:Timeout
    5:Serialize::put Too little space in buffer
    6:Serialize::get Too little data in buffer
    7:Serialize::put Payload size %llu exceeded maximum limit
    8:Serialize::put::Rec Buffer too small to hold rec size: %llu
    9:Initial reservation of recs failed.
    10:Invalid record size: %ld
    11:Reservation of recs failed.
    12:Invalid size of key range: %ld
    13:Reservation of ranges failed.
    14:Pushing back into full
    15:No empty page to reserve
    16:RecsGet - limit of bytes crossed

Each line maps a numeric message ID to a printf-style format string (a short sketch at the end of this document shows one way this format can be read programmatically).

## Standard usage of scripts

/home/tstorage/src/tools/tstools/logparser/logparser.py - writes the entire content of a binary log file to stdout in text format. Equivalent of: cat ./file.txt

/home/tstorage/src/tools/tstools/logparser/logfollow.py - watches the given file via inotify and writes every update made to it to stdout in text format. Equivalent of: tail -f ./file.txt

## Parser configuration

/home/tstorage/src/tools/tstools/logparser/logparserconfig.py - globally shared configuration for all parsing tools. Read the comments there for more information about what each parameter changes.

## Additional "data" scripts usage

In most of these scripts you can change RESOLUTION to modify the sampling size. RESOLUTION represents the width of a single bucket in seconds (a conceptual sketch of this bucketing follows below).

All scripts are in /home/tstorage/src/tools/scripts/parserScripts/:

./avgGetTime.py - shows a plot of the average GET request time, sampled into RESOLUTION-sized buckets

./hashCounter.py - shows a histogram of the total number of hashes a given request requires TStorage to count. May require tuning the *_PART parameters inside to run correctly for different instances.

./plotGETPUT.py - plots the number of requests currently "in the system" (either started or ended in this period), both GETs and PUTs

./plotRanges.py - plots the average size of a KeyRange in a given dimension; may be used to tune partition sizes for a given set of system requirements. Pass the named argument "normalizer" to the function, with a value equal to the partition width, to show the data in partition-length units.

./rangesHistogram.py - draws a histogram of range sizes in a given dimension, normalised to a given partition size. Very helpful when tuning partition sizes for a given set of requests.
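To make the RESOLUTION parameter more concrete, here is a minimal sketch of the kind of bucketing described above: each sample is assigned to the RESOLUTION-second bucket its timestamp falls into, and the values in each bucket are averaged. The function average_per_bucket and the sample data are hypothetical illustrations, not code taken from the parserScripts.

```python
from collections import defaultdict

# Bucket width in seconds; mirrors the RESOLUTION parameter used by the parserScripts.
RESOLUTION = 60

def average_per_bucket(samples, resolution=RESOLUTION):
    """Group (timestamp_s, value) samples into resolution-wide buckets and average each bucket.

    Hypothetical helper: it only illustrates the sampling described above,
    not the actual implementation of scripts such as avgGetTime.py.
    """
    buckets = defaultdict(list)
    for timestamp_s, value in samples:
        bucket_start = int(timestamp_s // resolution) * resolution
        buckets[bucket_start].append(value)
    return {start: sum(values) / len(values) for start, values in sorted(buckets.items())}

# Made-up GET times in milliseconds, averaged per 60 s bucket.
print(average_per_bucket([(0.5, 10.0), (10.0, 20.0), (65.0, 30.0)]))
# {0: 15.0, 60: 30.0}
```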
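For completeness, the logdict.txt format shown in the Environment preparation section (one ID:format-string entry per line, with C printf-style placeholders such as %ld and %llu) can also be consumed from your own Python scripts. The sketch below only illustrates reading that text format: load_logdict and render are hypothetical helpers, not part of the TStorage tools, and decoding the binary log stream itself remains the job of logparser.py.

```python
import re
from pathlib import Path

# Default location from the section above; adjust if logparserconfig.py was changed.
LOGDICT_PATH = Path("/home/tstorage/logdict.txt")

# Matches printf-style specifiers and captures everything except the C length
# modifier (l, ll, h, ...), which Python's %-formatting does not need.
_C_SPECIFIER = re.compile(
    r"%(?P<flags>[-+ #0]*\d*(?:\.\d+)?)(?:hh|h|ll|l|q|j|z|t|L)?(?P<conv>[diouxXeEfFgGcs])"
)

def load_logdict(path=LOGDICT_PATH):
    """Read logdict.txt ("<id>:<format string>" per line) into a dict keyed by int ID."""
    entries = {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.rstrip("\n")
            if not line:
                continue
            msg_id, _, fmt = line.partition(":")
            entries[int(msg_id)] = fmt
    return entries

def render(entries, msg_id, *args):
    """Look up a message by ID and substitute printf-style arguments, if any."""
    fmt = _C_SPECIFIER.sub(r"%\g<flags>\g<conv>", entries[msg_id])
    return fmt % args if args else fmt

if __name__ == "__main__":
    logdict = load_logdict()
    # With the example dictionary above this prints:
    # "Serialize::put Payload size 4096 exceeded maximum limit"
    print(render(logdict, 7, 4096))
```

The regular expression drops C length modifiers such as ll before formatting, because Python's %-formatting does not accept specifiers like %llu even though it tolerates a single l (as in %ld).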