Elon Musk jet tracker banned from Twitter

Hypocrite Elon Musk has finally banned all jet-tracker accounts on Twitter, despite claiming that he supports freedom of expression. Of course, the hypocrite only supports expression about things he agrees with.

In any case, the jet tracker has moved to Mastodon: https://mastodon.social/@elonjet

Original article here: https://techcrunch.com/2022/12/14/elon-jet-the-twitter-account-tracking-elon-musks-flights-was-permanently-suspended/

How to set limits on systemd services

This is a cookbook-style guide on how to set limits (ulimit style) on your custom services that are managed by systemd.


Why would you want to do something like this?

You might be running on a small server (or instance if you are using cloud services) and want to prevent your application from affecting other services sharing that server (think of noisy neighbor problem).

Generally, the Linux kernel scheduler does a good job of sharing system resources fairly, but that assumes you have a well-behaved application.

Sometimes you want to pack applications tightly and don’t mind if they run with less performance.

In summary, there are lots of reasons why you might want to tune the resources allocated to your applications.

Luckily, if you are using systemd as the controller (and if you are not, why not?), you can take advantage of its capabilities.


There are some caveats. You need to be running a fairly recent kernel and Linux distribution, such as a recent Ubuntu/Debian or CentOS/RedHat/Fedora.


I am going to show you how to get cloudquery running under systemd on Ubuntu 20.04 LTS. The reason I want to do this is that cloudquery will use as much memory as it can and trigger the Linux OOM killer.


There are 3 files needed:

  • /etc/default/cloudquery
    • This file contains the definition of CQ_SERVICE_ACCOUNT_KEY_JSON, whose value is the JSON content of your service account key file.
    • Example:
      • CQ_SERVICE_ACCOUNT_KEY_JSON='{ "type": "service_account", "project_id": "foobar", "private_key_id": "1a23b456cd134", "private_key": "-----BEGIN PRIVATE KEY-----\n.....vA8r\n-----END PRIVATE KEY-----\n", "client_email": "[email protected]", "client_id": "1234567890", "auth_uri": "https://accounts.google.com/o/oauth2/auth", "token_uri": "https://oauth2.googleapis.com/token", "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/foobar-sa%40foobar.iam.gserviceaccount.com" }'

  • /lib/systemd/system/cloudquery_limit.slice
    • [Unit]
      Description=Slice that limits memory for all my services

      # MemoryHigh works only in "unified" cgroups mode, NOT in "hybrid" mode
      # Must add 'systemd.unified_cgroup_hierarchy=1' to GRUB_CMDLINE_LINUX_DEFAULT
      # in /etc/default/grub
      # MemoryMax works in "hybrid" cgroups mode, too

      [Slice]
      # Hard memory cap for everything in this slice; pick a value for your system
      MemoryMax=2G
  • /etc/systemd/system/cloudquery.service
    • [Unit]
      Description=Cloud Query

      [Service]
      # Pull in CQ_SERVICE_ACCOUNT_KEY_JSON from the file above
      EnvironmentFile=/etc/default/cloudquery
      # Run inside the memory-limited slice
      Slice=cloudquery_limit.slice
      ExecStart=/usr/local/bin/cloudquery --config /data/cq/config.hcl fetch
      ExecReload=/bin/kill -HUP $MAINPID
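
As the caveats above mention, MemoryHigh only works under the unified cgroup hierarchy (cgroup v2). A quick way to check which mode your system is in, before fiddling with GRUB, is to look at the filesystem type mounted at /sys/fs/cgroup:

```shell
# cgroup2fs => "unified" mode (cgroup v2): MemoryHigh and MemoryMax both work
# tmpfs     => legacy/"hybrid" mode: only MemoryMax will be enforced
stat -fc %T /sys/fs/cgroup
```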


Once you have all 3 files in place and have edited the values to match your particular system, you need to tell systemd to reload its unit files, by running

systemctl daemon-reload

Once you have done that, you can check whether systemd sees your new service, by running

systemctl list-unit-files|grep query

Smoke Test

Test to see if everything works by starting your service.

systemctl start cloudquery

Check (and debug) the status of your new service via

systemctl status cloudquery

and journalctl -xe
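
To confirm the limits actually took effect, you can query the standard systemd properties of the unit (the values shown will reflect whatever you put in your slice file):

```shell
# Which slice did the service land in, and what memory limits apply?
systemctl show cloudquery -p Slice -p MemoryMax -p MemoryHigh

# Watch live per-cgroup resource usage, including your slice
systemd-cgtop
```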

Thanks to the posts from https://unix.stackexchange.com/questions/436791/limit-total-memory-usage-for-multiple-instances-of-systemd-service for pointing me in the right direction.

NordicTrack exercise equipment tablet

Link to Wiki on Reddit for NordicTrack android tablet.

Link to the manual for the android tablet (made by Walinda OEM)

Site Reliability Engineer Training

LinkedIn has open sourced their SRE online training materials. This is a wonderful gesture from LinkedIn.

It will be useful both to those wanting to enter the SRE field and to those wanting to learn more about it. I think it is also a nice way to round out the fuzzy areas for practicing SREs: we all have areas where we are domain experts, and areas where we can get by but are not comfortable.

The following section is lifted verbatim from LinkedIn.

There is a vast amount of resources scattered throughout the web on what the roles and responsibilities of SREs are, how to monitor site health, production incidents, define SLO/SLI etc. But there are very few resources out there guiding someone on the basic skill sets one has to acquire as a beginner. Because of the lack of these resources, we felt that individuals have a tough time getting into open positions in the industry. We created the School Of SRE as a starting point for anyone wanting to build their career as an SRE.

In this course, we are focusing on building strong foundational skills. The course is structured in a way to provide more real life examples and how learning each of these topics can play an important role in day to day SRE life. Currently we are covering the following topics under the School Of SRE:

HOW-TO customize Grafana legend/label

A question that I’ve seen asked many times on the web is how to shorten a Grafana legend/label.

E.g. using hostname as a legend will usually return the full FQDN, which can be too long if you have many hosts and make a mess of your panel.

standard legend using FQDN

Lots of searching shows people with similar questions and a number of requests for enhancements. In my case, there is a simple solution that works. Here is how.

Use the function label_replace(). So

rate(nginx_http_requests_total{instance=~"$instance", host="$host"}[5m])

turns into

label_replace(rate(nginx_http_requests_total{instance=~"$instance", host="$host"}[5m]), "hname", "$1", "instance", "(.*).foo.bar.local")


And the legend format changes from

{{instance}}-{{status}}  to  {{hname}}-{{status}}
Shorter legend
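
You can sanity-check the capture group from a shell before wiring it into Grafana. The hostname below is made up; also note that in label_replace() the unescaped dots in .foo.bar.local are regex wildcards, which usually doesn't matter in practice but is worth knowing:

```shell
# Emulate the "(.*)\.foo\.bar\.local" capture used by label_replace()
echo "web01.foo.bar.local" | sed -E 's/^(.*)\.foo\.bar\.local$/\1/'
# → web01
```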

Differences between API Gateway and Service Mesh

Enjoyed reading this post


Intuitively, I knew they were different, but I could not have explained it as clearly as the above post.


Monitoring sendgrid with Elasticsearch

If you are using sendgrid as a service for your outbound email, you would want to monitor and be able to answer questions such as:

  • how much email you are sending
  • status of sent email: success, bounced, delayed, etc.
  • trends
  • etc.

We get questions all the time from $WORK customer support folks on whether an email sent to a customer got there (the customer claims they never got it).  There could be any number of reasons why a customer does not see email sent from us.

  • our email is filtered into the customer’s spam folder
  • the email is rejected/bounced by the customer’s mail service
  • any number of network/server/service errors between us and the customer’s mail service
  • the email address the customer provided is invalid (and the email bounced)

If we had access to event logs from SendGrid, we would be able to quickly answer these types of questions.

Luckily, SendGrid offers an Event Webhook.

A verbatim quote from the above link:

SendGrid’s Event Webhook will notify a URL of your choice via HTTP POST with information about events that occur as SendGrid processes your email. Common uses of this data are to remove unsubscribes, react to spam reports, determine unengaged recipients, identify bounced email addresses, or create advanced analytics of your email program. With Unique Arguments and Category parameters, you can insert dynamic data that will help build a sharp, clear image of your mailings.

Log in to your SendGrid account and click on Mail Settings.

Then click on Event Notification.


In HTTP Post URL, enter the DNS name of the service endpoint you are going to set up next.

For example, mine is (not a valid endpoint, but close enough): https://sendlog.mydomain.com/logger

Since I do not believe in reinventing the wheel, I use the simple SendGrid event listener that Adly Abdullah has already written (note: this is my forked version, which works with ES 6.x).  It is a nodejs service, and you can install it via npm.

$ sudo npm install -g sendgrid-event-logger pm2

You also want pm2 (nodejs Process Manager version 2), a very nice nodejs process manager.

Next, edit and configure sendgrid-event-logger (SEL for short).  If the default config works for you, there is no need to change anything.  Check that it points to where your ES host is located (mine is running on the same instance, hence localhost).  I also left SEL listening on port 8080, as that port is available on this instance.

$ cat /etc/sendgrid-event-logger.json
{
    "elasticsearch_host": "localhost:9200",
    "port": 8080,
    "use_basicauth": true,
    "basicauth": {
        "user": "sendgridlogger",
        "password": "KLJSDG(#@%@!gBigSecret"
    },
    "use_https": false,
    "https": {
        "key_file": "",
        "cert_file": ""
    },
    "days_to_retain_log": 365
}

NOTE: I have use_https set to false because my nginx front-end already terminates https.

Since SEL is listening on port 8080 (an unprivileged port), you can run it as yourself.

$ pm2 start sendgrid-event-logger -i 0 --name "sendgrid-event-logger"

Verify that SEL is running.

$ pm2 show 0

Describing process with id 0 - name sendgrid-event-logger
│ status            │ online                                               │
│ name              │ sendgrid-event-logger                                │
│ restarts          │ 0                                                    │
│ uptime            │ 11m                                                  │
│ script path       │ /usr/bin/sendgrid-event-logger                       │
│ script args       │ N/A                                                  │
│ error log path    │ $HOME/.pm2/logs/sendgrid-event-logger-error-0.log    │
│ out log path      │ $HOME/.pm2/logs/sendgrid-event-logger-out-0.log      │
│ pid path          │ $HOME/.pm2/pids/sendgrid-event-logger-0.pid          │
│ interpreter       │ node                                                 │
│ interpreter args  │ N/A                                                  │
│ script id         │ 0                                                    │
│ exec cwd          │ $HOME                                                │
│ exec mode         │ fork_mode                                            │
│ node.js version   │ 8.11.1                                               │
│ watch & reload    │ ✘                                                    │
│ unstable restarts │ 0                                                    │
│ created at        │ 2018-02-14T23:36:06.705Z                             │
Code metrics value
│ Loop delay      │ 0.68ms │
│ Active requests │ 0      │
│ Active handles  │ 4      │

I use nginx and here is my nginx config for SEL.

/etc/nginx/sites-available $ cat sendgrid-logger
upstream sendgrid_logger {
  # SEL is listening here (see config above)
  server localhost:8080;
}

server {
  server_name slog.mysite.org slog;
  listen 443 ssl;

  include snippets/ssl.conf;
  access_log /var/log/nginx/slog/access.log;
  error_log /var/log/nginx/slog/error.log;
  proxy_connect_timeout 5m;
  proxy_send_timeout 5m;
  proxy_read_timeout 5m;

  location / {
    proxy_pass http://sendgrid_logger;
  }
}

Enable the site and reload nginx:

$ sudo ln -s /etc/nginx/sites-available/sendgrid-logger /etc/nginx/sites-enabled/
$ sudo systemctl reload nginx

Make sure the SendGrid Event Webhook is turned on, and you should be seeing events coming in.   Check your Elasticsearch cluster for new indices.

$ curl -s localhost:9200/_cat/indices|grep mail
green open mail-2018.03.31 -g6Tw9b9RfqZnBVYLdrF-g 1 0 2967 0 1.4mb 1.4mb
green open mail-2018.03.28 GxTRx2PgR4yT5kiH0RKXrg 1 0 8673 0 4.2mb 4.2mb
green open mail-2018.04.06 2PO9YV1eS7eevZ1dfFrMGw 1 0 10216 0 4.9mb 4.9mb
green open mail-2018.04.11 _ZINqVPTSwW7b8wSgkTtTA 1 0 8774 0 4.3mb 4.3mb


Go to Kibana and set up an index pattern.  In my case, it’s mail-*.  Go to Discover, select the mail-* index pattern, and play around.
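
If you prefer the command line to Kibana, you can get a quick breakdown of event types straight from Elasticsearch. This is a sketch with two assumptions: the event field name comes from SendGrid's webhook payload, and the event.keyword sub-field assumes SEL indexes it with the default dynamic string mapping.

```shell
# Count events per type (delivered, bounce, open, ...) across all mail-* indices
curl -s 'localhost:9200/mail-*/_search?size=0' \
  -H 'Content-Type: application/json' \
  -d '{"aggs": {"by_event": {"terms": {"field": "event.keyword"}}}}'
```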

Here is my simple report.  Around 9am, something happened that caused a huge spike in mail events.


Next step is for you to create dashboards to fit your needs.


