Hypocrite Elon Musk has finally banned all jet tracker accounts on Twitter, despite claiming that he supports freedom of expression. Of course, the hypocrite only supports expression about things he agrees with.
This is a cookbook-style guide on how to set resource limits (ulimit style) on custom services that are managed by systemd.
Use case
Why would you want to do something like this?
You might be running on a small server (or instance, if you are using cloud services) and want to prevent your application from affecting other services sharing that server (think of the noisy neighbor problem).
Generally, the Linux kernel scheduler does a good job of sharing system resources fairly, but that assumes you have a well-behaved application.
Sometimes you want to pack applications tightly and don't mind them being less performant.
In summary, there are lots of reasons why you might want to tune the resources allocated to your applications.
Luckily, if you are using systemd as the controller (and if you are not, why not?), you can take advantage of its capabilities.
Note:
There are some caveats. You need to be using a fairly recent kernel and Linux distribution, such as Ubuntu/Debian or a recent CentOS/RedHat/Fedora.
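A quick way to check which cgroups mode a host is running is to look at the filesystem type mounted at /sys/fs/cgroup; cgroup2fs means the unified hierarchy, while tmpfs indicates hybrid or legacy mode.
stat -fc %T /sys/fs/cgroup/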
What
I am going to show you how to get cloudquery running under systemd on Ubuntu 20.04 LTS. The reason I want to do this is that cloudquery will use as much memory as it can and trigger the Linux OOM killer.
How
There are 3 files needed:
/etc/default/cloudquery
This file contains the definition of CQ_SERVICE_ACCOUNT_KEY_JSON, whose value is the JSON content of your service account key file.
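As a sketch, the file looks something like this (the key material below is a placeholder, not a real credential):
# /etc/default/cloudquery
CQ_SERVICE_ACCOUNT_KEY_JSON='{"type": "service_account", "project_id": "my-project", "private_key": "..."}'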
[Unit]
Description=Slice that limits memory for all my services

[Slice]
# MemoryHigh works only in "unified" cgroups mode, NOT in "hybrid" mode
# Must add 'systemd.unified_cgroup_hierarchy=1' to GRUB_CMDLINE_LINUX_DEFAULT
# in /etc/default/grub
MemoryHigh=10240M
# MemoryMax works in "hybrid" cgroups mode, too
MemoryMax=10240M
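The slice unit above goes under /etc/systemd/system/ with a .slice extension. The third file is the service unit for cloudquery itself. Here is a minimal sketch, assuming the slice is saved as mylimits.slice, the binary lives at /usr/local/bin/cloudquery, and fetch is the subcommand you want to run:

[Unit]
Description=cloudquery
Wants=network-online.target
After=network-online.target

[Service]
# load CQ_SERVICE_ACCOUNT_KEY_JSON from the defaults file above
EnvironmentFile=/etc/default/cloudquery
# run inside the memory-limited slice defined above
Slice=mylimits.slice
# the exact command line is an assumption; adjust to your cloudquery invocation
ExecStart=/usr/local/bin/cloudquery fetch
Restart=on-failure

[Install]
WantedBy=multi-user.target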
Once you have all 3 files in place and edited the values to match your particular system, you need to tell systemd to rescan its unit directories for the new service by running
systemctl daemon-reload
Once you have done that, you can check to see if systemd sees your new service by running
systemctl list-unit-files|grep query
Smoke Test
Test to see if everything works by starting your service.
systemctl start cloudquery
Check (and debug) the status of your new service via systemctl and journalctl.
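For example, assuming the unit is named cloudquery.service as above:
systemctl status cloudquery
journalctl -u cloudquery -f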
It will be useful both to those wanting to enter the SRE field and to those wanting to learn more about it. I think it is also a nice way to round out the fuzzy areas for practicing SREs. What I mean is that we all have areas we are domain experts in, and areas we can get by in but are not comfortable with.
The following section is lifted verbatim from LinkedIn.
There is a vast amount of resources scattered throughout the web on what the roles and responsibilities of SREs are, how to monitor site health, production incidents, define SLO/SLI etc. But there are very few resources out there guiding someone on the basic skill sets one has to acquire as a beginner. Because of the lack of these resources, we felt that individuals have a tough time getting into open positions in the industry. We created the School Of SRE as a starting point for anyone wanting to build their career as an SRE.
In this course, we are focusing on building strong foundational skills. The course is structured in a way to provide more real life examples and how learning each of these topics can play an important role in day to day SRE life. Currently we are covering the following topics under the School Of SRE:
A question that I've seen asked many times on the web is how to shorten a Grafana legend/label.
E.g. using hostname as a legend will usually return the full FQDN, which can be too long if you have many hosts and makes a mess of your panel.
standard legend using FQDN
Lots of searching shows people having similar questions and a number of requests for enhancements. In my case, there is a simple solution that works. Here is how.
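As one example of how this can be done, assuming a Prometheus data source, label_replace can copy the short hostname into a new label, which the panel's legend format then references as {{host}}:
label_replace(node_load1, "host", "$1", "instance", "([^.:]+).*")
Here node_load1 and the instance label are placeholders; substitute your own metric and source label.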
If you are using SendGrid as a service for your outbound email, you will want to monitor it and be able to answer questions such as:
how much email you are sending
status of sent email – success, bounced, delayed, etc.
trends
etc.
We get questions all the time from $WORK customer support folks on whether an email sent to a customer got there (the customer claims they never got it). There could be any number of reasons why a customer does not see email sent from us.
our email is filtered into the customer's spam folder
the email is rejected/bounced by the customer's mail service
any number of network/server/service related errors between us and the customer's mail service
the email address the customer provided is invalid (and the email bounced)
If we had access to event logs from SendGrid, we would be able to quickly answer these types of questions.
SendGrid’s Event Webhook will notify a URL of your choice via HTTP POST with information about events that occur as SendGrid processes your email. Common uses of this data are to remove unsubscribes, react to spam reports, determine unengaged recipients, identify bounced email addresses, or create advanced analytics of your email program. With Unique Arguments and Category parameters, you can insert dynamic data that will help build a sharp, clear image of your mailings.
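Each webhook delivery is an HTTP POST whose body is a JSON array of event objects. A trimmed example of what one event looks like (values are illustrative):
[
  {
    "email": "user@example.com",
    "timestamp": 1522540800,
    "event": "delivered",
    "smtp-id": "<original-message-id@example.com>",
    "sg_event_id": "sg_event_id_goes_here",
    "sg_message_id": "sg_message_id_goes_here"
  }
]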
Log in to your SendGrid account and click on Mail Settings.
Then click on Event Notification
In HTTP Post URL, enter the DNS name of the service endpoint you are going to set up next.
For example, mine is (not a valid endpoint, but close enough): https://sendlog.mydomain.com/logger
You want to install pm2 (Node.js Process Manager version 2), a very nice Node.js process manager.
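pm2 is installed globally via npm (this assumes Node.js/npm are already on the box):
npm install -g pm2
Once SEL is configured below, pm2 start with the path to SEL's entry point, followed by pm2 save and pm2 startup, will keep it running across reboots.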
Next, edit and configure sendgrid-event-logger (SEL for short). If the default config works for you, then there is no need to do anything. Check to make sure it is pointing to where your Elasticsearch host is located (mine is running on the same instance, hence localhost). I also left SEL listening on port 8080, as that port was available on this instance.
Make sure the SendGrid Event webhook is turned on, and you should start seeing events coming in. Check your Elasticsearch cluster for new indices.
$ curl -s localhost:9200/_cat/indices|grep mail
green open mail-2018.03.31 -g6Tw9b9RfqZnBVYLdrF-g 1 0 2967 0 1.4mb 1.4mb
green open mail-2018.03.28 GxTRx2PgR4yT5kiH0RKXrg 1 0 8673 0 4.2mb 4.2mb
green open mail-2018.04.06 2PO9YV1eS7eevZ1dfFrMGw 1 0 10216 0 4.9mb 4.9mb
green open mail-2018.04.11 _ZINqVPTSwW7b8wSgkTtTA 1 0 8774 0 4.3mb 4.3mb
etc.
Go to Kibana and set up an index pattern. In my case, it's mail-*. Go to Discover, select the mail-* index pattern, and play around.
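If you want to poke at the data outside Kibana, a quick terms aggregation gives a breakdown by event type (the event field name and its .keyword mapping are assumptions about how SEL indexes the webhook payload):
curl -s 'localhost:9200/mail-*/_search?pretty' -H 'Content-Type: application/json' -d '{
  "size": 0,
  "aggs": { "by_event": { "terms": { "field": "event.keyword" } } }
}'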
Here is my simple report. I see that around 9am something happened that caused a huge spike in mail events.
Next step is for you to create dashboards to fit your needs.