On Thu, Jul 20, 2017 at 10:56 AM Jim J. Jewett <jimjjewett@gmail.com> wrote:
> I agree that startup time is a problem, but I wonder if some of the pain could be mitigated by using a persistent process.
This is one strategy that works under some situations, but not all.
There are downsides to daemons:
* They only work on one machine. New instances being launched in a cloud (think kubernetes jobs, app engine workers, etc.) cannot benefit.
* A daemon that forks off new workers loses the benefit of hash randomization, as tons of processes at once share the same seed. This can be mitigated by regularly relaunching replacement daemons, but that complicates the already complicated.
* Correctly launching and managing a daemon process is hard. Even once you have done so, you now have interprocess concurrency and synchronization issues.
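To make the hash-randomization point concrete, here is a minimal Unix-only sketch (assuming CPython with hash randomization enabled, i.e. PYTHONHASHSEED unset) showing that a forked child inherits its parent's hash seed:

```python
import os

# Unix-only illustration: CPython picks its string-hash seed once at
# interpreter startup, so a forked worker inherits the parent's seed.
# Every worker a daemon forks therefore computes identical hash()
# values, defeating the point of hash randomization.
def hash_in_forked_child(value):
    """Return hash(value) as computed inside a forked child process."""
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:  # child: report the hash back to the parent and exit
        os.write(w, str(hash(value)).encode())
        os._exit(0)
    os.close(w)
    child_hash = int(os.read(r, 64))
    os.close(r)
    os.waitpid(pid, 0)
    return child_hash

if __name__ == "__main__":
    # Same value in parent and child -- the seed was inherited, whereas
    # two independently started interpreters would almost surely differ.
    assert hash_in_forked_child("spam") == hash("spam")
```

A freshly exec'd interpreter draws a new seed, which is why relaunching replacement daemons (rather than only forking) mitigates this.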
For example, in https://mail.python.org/pipermail/python-dev/2017-July/148664.html Ben Hoyt mentions that the Google Cloud SDK (CLI) team has found it "especially problematic for shell tab completion helpers, because every time you press tab the shell has to load your Python program"
I can imagine a daemon working well in this specific example.
> Is it too hard to create a daemon server?

That is my take on it.

> Is the communication and context switch slower than a new startup? Is the pattern just not well-enough advertised?
I have had good experiences with daemon processes. Bazel (a Java-based build system) uses that approach.
I can imagine Mercurial being able to do so as well but have no idea if they've looked into it or not.
Daemons are by their nature an application specific thing.
-gps