I run numerous scripts via CRON throughout the day, and as they rely on third-party providers which are sometimes slow, I currently run them like so:

    timeout 60m bash /script.sh

and that's worked great for me: terminate the script after 60 minutes, as it should only take 5 minutes to run, etc.

I do log the job and output to log files when certain parts of the scripts are executed, but I'm looking to get a little more verbose as to what's going on with my scripts and jobs. I'm looking for a way to notify me, as an example, every 10 minutes of execution (a running timer would be needed, of course). I don't need to know how to notify me, I can handle that, but I am looking for input as to "how" I can monitor the various scripts, notifying me every X minutes (different for different scripts), as well as notifying me when a script gets killed.

A typical crontab entry looks like:

    42 01 * * * mkdir -p '/tmp-data/logs/specific-script/' & timeout 60m bash /script.sh > /tmp-data/logs/specific-script-$(date ...

I did try various things with ps, but unless I'm doing something extremely weird with my jobs, I get 3 results for every script that's executed. Using something like

    ps -e -o pid,etime,etimes,cmd | grep 'ash'

I get three entries per CRON job, it seems, and of course all 3 don't need to be killed, so I wouldn't even know how to find the "one" that does.

Not really looking for code; any direction helps, even as to how others "monitor" their jobs, keep tabs on how long they are running, execute commands if they run slow/long, kill ones that run too long, and get notified when one was killed.


I've moved most of my cron jobs into systemd timer units. This makes the process of managing them much easier; I don't need to use ps to look up processes to see which PID to kill.

E.g., instead of:

    42 01 * * * mkdir -p '/tmp-data/logs/specific-script/' & timeout 60m bash /script.sh > /tmp-data/logs/specific-script-$(date ...

I would create a service in ~/.config/systemd/user/myservice.service and a timer in ~/.config/systemd/user/myservice.timer, and then:

    $ systemctl --user enable myservice.timer

Rather than using timeout, I'm letting systemd manage the maximum service runtime. If I need to stop a service prematurely, I can simply run systemctl --user stop myservice; no need to look things up with ps. If I want to stop the service from running periodically, I can systemctl --user stop myservice.timer (to stop the timer until the next reboot), or systemctl --user disable --now myservice.timer (to both stop the timer and prevent it from starting again after the next reboot). If I want to run the service on demand, I can systemctl --user start myservice.

I can rely on systemd to collect the logs, which I can see by running journalctl --user -u myservice. There are a variety of additional flags I can pass to journalctl to filter messages by time, content, etc.

With respect to monitoring your services, you could probably handle that with an ExecStopPost script in your service unit:

    ExecStopPost=/bin/bash /script-to-check-and-notify-me.sh

These scripts have access to a number of environment variables, including $SERVICE_RESULT. If your service is killed because it exceeded RuntimeMaxSec, $SERVICE_RESULT will be timeout, so you could check this and perform some monitoring action (send an email, play a sound, whatever).

The timeout will also be part of the service log, e.g.:

    $ journalctl --user -u example.service
    Aug 28 07:59:44 madhatter systemd: Started example.service.
    Aug 28 07:59:54 madhatter systemd: example.service: Service reached runtime time limit.
    Aug 28 07:59:54 madhatter systemd: example.service: Failed with result 'timeout'.
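The contents of the service and timer unit files didn't survive in this post, so here is a minimal sketch of what they could look like. The names and paths (myservice, /script.sh), the 60-minute RuntimeMaxSec, and the 01:42 schedule are assumptions inferred from the original crontab line, not the author's actual files.

```ini
# ~/.config/systemd/user/myservice.service  (sketch; names/paths assumed)
[Unit]
Description=Example script with a 60-minute runtime cap

[Service]
Type=oneshot
ExecStart=/bin/bash /script.sh
# Replaces `timeout 60m`: systemd kills the service past this runtime
RuntimeMaxSec=60m
# Runs after the service stops, with $SERVICE_RESULT set in its environment
ExecStopPost=/bin/bash /script-to-check-and-notify-me.sh

# ~/.config/systemd/user/myservice.timer  (sketch; schedule mirrors "42 01 * * *")
[Timer]
OnCalendar=*-*-* 01:42:00
# Optional: run a missed activation after e.g. the machine was off
Persistent=true

[Install]
WantedBy=timers.target
```

Note the service itself needs no [Install] section when it is only ever started by the timer (or on demand with systemctl --user start myservice).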
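As a concrete, hypothetical example of the ExecStopPost idea, /script-to-check-and-notify-me.sh might look like the following. systemd exports SERVICE_RESULT, EXIT_CODE, and EXIT_STATUS to ExecStopPost= commands; the notification itself is a placeholder echo here, to be replaced with mail, a push notification, etc.

```shell
#!/bin/bash
# Hypothetical ExecStopPost= helper: inspect why the service stopped.
# systemd sets SERVICE_RESULT, EXIT_CODE and EXIT_STATUS in our environment.

check_result() {
    if [ "${SERVICE_RESULT:-}" = "timeout" ]; then
        # Killed for exceeding RuntimeMaxSec -- notify however you like;
        # this placeholder just prints.
        echo "NOTIFY: service hit RuntimeMaxSec (${EXIT_CODE:-?}/${EXIT_STATUS:-?})"
    else
        echo "OK: service ended with result '${SERVICE_RESULT:-success}'"
    fi
}

check_result
```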
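The question also asks for a notification every N minutes while a job is still running, which neither timeout nor RuntimeMaxSec provides. One way to sketch that part is a small wrapper that backgrounds the job and emits a heartbeat while it is alive; `notify`, `run_with_heartbeat`, and `HEARTBEAT_SECONDS` are illustrative names, not anything from the original post.

```shell
#!/bin/bash
# Sketch: run a command, printing a heartbeat while it is still running.
# `notify` is a placeholder -- swap in mail, a push notification, etc.

notify() { echo "$(date '+%F %T') $*"; }

run_with_heartbeat() {
    local interval="${HEARTBEAT_SECONDS:-600}"   # default: every 10 minutes
    "$@" &                                       # start the real job
    local job=$! elapsed=0
    while kill -0 "$job" 2>/dev/null; do         # loop while the job is alive
        sleep 1
        elapsed=$((elapsed + 1))
        if [ $((elapsed % interval)) -eq 0 ]; then
            notify "still running after ${elapsed}s: $*"
        fi
    done
    wait "$job"                                  # propagate the job's exit status
}

# Example: heartbeat every second while `sleep 3` runs
HEARTBEAT_SECONDS=1 run_with_heartbeat sleep 3
```

A per-script interval then just means a different HEARTBEAT_SECONDS per invocation, matching the "different for different scripts" requirement.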