Watch out if you are using libraries and code from public repositories. Supply chain attacks are on the rise (and have been for a while).
The latest one targets Rust.
Welcome to tinman alley!
I was cleaning my collection of documents, software… OK, boxes and boxes of books, manuals, floppies, QIC tapes, DAT tapes, and 8mm tapes…
when I found a box of…
and here is what's inside the box.
Wow, that brings back memories.
Watch a video on the history of the Internet Archive.
Twitch source code and creator payouts were part of a massive hack.
https://www.theverge.com/2021/10/6/22712250/twitch-hack-leak-data-streamer-revenue-steam-competitor
In the world of Ops, it's always good to learn from mistakes. It's not good enough that we solved a problem (*fix*); we must also do a post-mortem to understand what went wrong (*root cause*) and what we can do to prevent it in the future (*long-term solution*).
I am of the opinion that long-term solutions are preferred to short-term fixes (hacks!). But long-term solutions are not easy; they almost always require understanding the root cause, and that is not always obvious.
After any incident, crisis, or problem, whatever you want to call it, make sure you have a *blame-free* post-mortem. This is very important. We are not looking to blame anyone; we should be focusing on the root cause and how it can be prevented from happening again. Going into a post-mortem with the right mindset also helps make the process go much smoother, and you will get better cooperation from the involved parties. It's a team effort to improve everyone's job.
The process should be something like this:
1. Fix the immediate problem.
2. Dig into what went wrong until you find the root cause.
3. Put a long-term solution in place so it cannot happen again.
4. Write it up so others can learn from it.
It’s good if we can learn from past mistakes. It is even better if we can learn from others’ mistakes!
Here is the start of a list of Operational mistakes published on the web. I will be adding more as I find them. Feel free to submit any that I missed. Thanks!
Very nice post from David Henke:
USB Ethernet Adapters for TiVo
Here is a collected list of USB adapters I got from http://www.tivocommunity.com/tivo-vb/showthread.php?s=&threadid=54620&pagenumber=3
I bought a cheap one (Farallon USB 1.1 to Ethernet) for $13 from Computer Geek, and it worked great. Just plug-n-play 🙂
09/11/2005 Got word from Antonio Carlos that a Linksys USB200M works great.
06/17/2004 I've received feedback from Rob Clark that a D-Link DSB-H3ETX (USB to Ethernet adapter) also works. He bought his locally for $15, and the link he sent is http://support.dlink.com/products/view.asp?productid=DSB%2DH3ETX.
Basically, any USB-to-Ethernet adapter that uses the Pegasus chipset should work with the TiVo, since Linux has driver support for that chip.
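If you want to confirm that an adapter really is Pegasus-based, a quick check on any Linux box (the exact dmesg wording varies by kernel) is:

# after plugging the adapter in, see if the pegasus driver claimed it
dmesg | grep -i pegasus
# and confirm the module is loaded
lsmod | grep pegasus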
Adapters using the Pegasus chipset, by vendor:
3COM: 3Com USB Ethernet 3C460B
ABOCOM: USB 10/100 Fast Ethernet; USB HPNA/Ethernet
ACCTON: Accton USB 10/100 Ethernet Adapter; SpeedStream USB 10/100 Ethernet
ADMTEK: ADMtek ADM8511 Pegasus II USB Ethernet; ADMtek AN986 Pegasus USB Ethernet (eval. board)
ALLIEDTEL: Allied Telesyn Int. AT-USB100
BELKIN: Belkin F5D5050 USB Ethernet
BILLIONTON: Billionton USB-100; Billionton USBE-100; Billionton USBEL-100; Billionton USBLP-100
COMPAQ: iPAQ Networking 10/100 USB
COREGA: Corega FEter USB-TX
DLINK: D-Link DSB-650; D-Link DSB-650TX; D-Link DSB-650TX(PNA)
ELSA: Elsa Micolink USB2Ethernet
HAWKING: Hawking UF100 10/100 Ethernet
IODATA: IO DATA USB ET/TX; IO DATA USB ET/TX-S
KINGSTON: Kingston KNU101TX Ethernet
LANEED: LANEED USB Ethernet LD-USB/T; LANEED USB Ethernet LD-USB/TX
LINKSYS: Linksys USB100TX; Linksys USB10TX; Linksys USB Ethernet Adapter; Linksys USB USB10TX; Linksys USB100M; Linksys USB200M
MELCO: MELCO/BUFFALO LUA2-TX; MELCO/BUFFALO LUA-TX
SIEMENS: SpeedStream USB 10/100 Ethernet
SMARTBRIDGES: smartNIC 2 PnP Adapter
SMC: SMC 202 USB Ethernet
SOHOWARE: SOHOware NUB100 Ethernet
Last Updated: 2003/08/19 04:32:49
Big celebration for LinkedIn as the company hits 500M+ members.
This picture was taken at LinkedIn HQ in Sunnyvale. I am the guy in the middle of that red circle.
The picture was taken by a Mavic Pro drone flying above the building. The drone belongs to one of my colleagues.
I use tribe nodes quite a lot at $work. It's how we federate disparate ELK clusters and are able to search across all of them. There are many reasons to have distinct ELK clusters in each data center and/or region.
Some of these are:
1. Elasticsearch does not work well when there is network latency, which is guaranteed when your nodes are in geographically distant places. You could spend a lot of money on fast network connections, or you can just have local clusters only. (Me? I pick saving money and avoiding headaches :-)).
2. It can get insanely expensive to create an ES cluster that spans data centers/regions. The network bandwidth requirements, the data charges, the care and feeding of such a latency-sensitive cluster… OMG!
3. I don’t really think a 3rd reason is needed.
Although tribe nodes are great for federating ES clusters, there are some quirks in setting them up and caring for them (not as bad as ES clusters that span data centers, though).
One big gotcha for many people setting up tribe nodes for the first time is that a tribe node cannot create indices. A tribe node can only update and modify existing indices. What this means is that if you point Kibana at a tribe node, you must first make sure your Kibana index has already been created in one of the downstream ES clusters; otherwise, you will have to create it yourself.
If you don't, the first time you create an index pattern and try to save it, you will get an error similar to the subject of this post:
MasterNotDiscoveredException
The error message is wrong and misleading. It has nothing to do with the master node; it has everything to do with the tribe node not being able to create (PUT) a Kibana index.
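You can see the quirk for yourself with plain curl (the tribe hostname here is hypothetical; the downstream hostname comes from the config below):

# Creating a brand-new index through the tribe node fails with
# MasterNotDiscoveredException, even though the node itself is healthy:
curl -s -XPUT "http://tribe-node:9200/test-index"
# The same request against a node in a downstream cluster succeeds:
curl -s -XPUT "http://dc1-app13225.prod.example.com:9200/test-index"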
Personally, I prefer to give the Kibana index that I use with the tribe its own unique name, and I run a dedicated Kibana instance pointing at a dedicated tribe (client) node.
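As a rough sketch, that dedicated instance's kibana.yml only needs two settings changed, assuming Kibana 4.2+ (the hostname is hypothetical; the index name matches the kibana4 example at the end of this post):

# Point this Kibana at the tribe (client) node, not at any one cluster
elasticsearch.url: "http://tribe-node:9200"
# Use a unique saved-objects index instead of the default .kibana
kibana.index: "TRIBENAME-kibana4"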
Here are the steps I do to get a tribe node and its associated Kibana ready for use.
1. Configure the tribe node so it knows about all the ES clusters I want to federate data from.
tribe.elasticsearch.yml:
cluster.name: toplevel_tribe
node.name: ${HOSTNAME}
node.master: false
node.data: false
tribe:
  DC1-appservice:
    cluster.name: logging-DC1
    network.host: 0.0.0.0
    network.publish_host: ${HOSTNAME}
    discovery.zen.ping.unicast.hosts:
      - dc1-app13225.prod.example.com
      - dc1-app13226.prod.example.com
      - dc1-app13227.prod.example.com
  DC2-appservice:
    cluster.name: logging-DC2
    network.host: 0.0.0.0
    network.publish_host: ${HOSTNAME}
    discovery.zen.ping.unicast.hosts:
      - dc2-app12281.prod.example.com
      - dc2-app12282.prod.example.com
      - dc2-app12283.prod.example.com
  # DC3 ..... etc to DCNN follow the same pattern
  my-es-dedicated-config-cluster:
    cluster.name: es-config-CORP
    network.host: 0.0.0.0
    network.publish_host: ${HOSTNAME}
    discovery.zen.ping.unicast.hosts:
      - corp-app1234.example.com
  on_conflict: prefer_my-es-dedicated-config-cluster

2. Now pre-create the Kibana index in my-es-dedicated-config-cluster. This is a small cluster in my admin/corp data center that is only for housing configurations, Kibana dashboards, etc.
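Before creating anything, it's worth verifying that the tribe node can actually see all the downstream clusters; asking it for the indices it knows about should return indices from every cluster (hostname hypothetical):

curl -s "http://tribe-node:9200/_cat/indices?v"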
3. A simpler and less error-prone alternative is to temporarily point Kibana at the dedicated ES cluster (instead of the tribe) and let Kibana create the index itself.
Do this via this setting in your kibana.yml file:
# The Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://ES-node:9200"
Start Kibana and let it create the index. Then stop it and change the setting back to point at your tribe node.
Doing it this way ensures that your Kibana index is correct.
curl commands for pre-creating the Kibana (3 and 4) indices:
# Kibana3
curl -s -XPUT "http://localhost:9200/kibana3-int/" -d '{
  "settings" : {
    "number_of_shards" : 3,
    "number_of_replicas" : 2
  },
  "mappings" : {
    "temp" : {
      "properties" : {
        "dashboard" : { "type" : "string" },
        "group" : { "type" : "string" },
        "title" : { "type" : "string" },
        "user" : { "type" : "string" }
      }
    },
    "dashboard" : {
      "properties" : {
        "dashboard" : { "type" : "string" },
        "group" : { "type" : "string" },
        "title" : { "type" : "string" },
        "user" : { "type" : "string" }
      }
    }
  }
}'
# Kibana4
curl -s -XPUT "http://localhost:9200/TRIBENAME-kibana4" -d '{
  "index.mapper.dynamic" : true,
  "settings" : {
    "number_of_shards" : 1,
    "number_of_replicas" : 0
  },
  "mappings" : {
    "search" : {
      "_timestamp" : { },
      "properties" : {
        "columns" : { "type" : "string" },
        "description" : { "type" : "string" },
        "hits" : { "type" : "long" },
        "kibanaSavedObjectMeta" : { "properties" : { "searchSourceJSON" : { "type" : "string" } } },
        "sort" : { "type" : "string" },
        "title" : { "type" : "string" },
        "version" : { "type" : "long" }
      }
    },
    "dashboard" : {
      "_timestamp" : { },
      "properties" : {
        "description" : { "type" : "string" },
        "hits" : { "type" : "long" },
        "kibanaSavedObjectMeta" : { "properties" : { "searchSourceJSON" : { "type" : "string" } } },
        "optionsJSON" : { "type" : "string" },
        "panelsJSON" : { "type" : "string" },
        "timeRestore" : { "type" : "boolean" },
        "title" : { "type" : "string" },
        "uiStateJSON" : { "type" : "string" },
        "version" : { "type" : "long" }
      }
    },
    "visualization" : {
      "_timestamp" : { },
      "properties" : {
        "description" : { "type" : "string" },
        "kibanaSavedObjectMeta" : { "properties" : { "searchSourceJSON" : { "type" : "string" } } },
        "savedSearchId" : { "type" : "string" },
        "title" : { "type" : "string" },
        "uiStateJSON" : { "type" : "string" },
        "version" : { "type" : "long" },
        "visState" : { "type" : "string" }
      }
    },
    "config" : {
      "_timestamp" : { },
      "properties" : {
        "buildNum" : { "type" : "long" },
        "defaultIndex" : { "type" : "string" }
      }
    },
    "index-pattern" : {
      "_timestamp" : { },
      "properties" : {
        "customFormats" : { "type" : "string" },
        "fieldFormatMap" : { "type" : "string" },
        "fields" : { "type" : "string" },
        "intervalName" : { "type" : "string" },
        "timeFieldName" : { "type" : "string" },
        "title" : { "type" : "string" }
      }
    }
  }
}'
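Once the index exists in the dedicated config cluster, you can confirm that the tribe node sees it before pointing Kibana back at the tribe (index name from the example above; hostname hypothetical):

curl -s "http://tribe-node:9200/TRIBENAME-kibana4/_settings?pretty"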