Every day, Opening Source Daily recommends one quality GitHub open-source project and one hand-picked English technology or programming article. Follow us! QQ group: 202790710; Weibo: https://weibo.com/openingsource; Telegram group: https://t.me/OpeningSourceOrg

November 24, 2018: Opening Source Daily, Issue 261

Today's recommended open-source project: the Juejin Translation Plan (gold-miner). Portal: GitHub link

Why we recommend it: The Juejin Translation Plan is a community dedicated to translating English articles on Juejin; the official Chinese documentation for TensorFlow is their work. Beyond that, it covers cutting-edge fields such as artificial intelligence and blockchain, as well as broadly useful front-end and back-end topics, so browsing this community for suitable articles is a good option when you need to learn these technologies. Of course, they also welcome new translators and recommendations of new articles.


Today's recommended English article: "How to save hours of debugging with logs" by Maya Gilad

Original link: https://medium.freecodecamp.org/how-to-save-hours-of-debugging-with-logs-6989cc533370

Why we recommend it: the author shares some lessons learned from working with logs; they may come in handy when you need to dig through logs to fix a failure.

How to save hours of debugging with logs


A good logging mechanism helps us in our time of need.

When we're handling a production failure or trying to understand an unexpected response, logs can be our best friend or our worst enemy.

Logs are enormously important to our ability to handle failures. Yet in our day-to-day work, when we design a new production service or feature, we sometimes overlook that importance and neglect to give logs proper attention.

When I started developing, I made a few logging mistakes that cost me many sleepless nights. Now I know better, and I can share with you a few practices I've learned over the years.

Not enough disk space

When developing on our local machine, we usually don't mind using a file handler for logging. Our local disk is quite large, and the amount of log entries being written is very small.

That is not the case on our production machines. Their local disk usually has limited free space, and in time it won't be able to hold all the log entries of a production service. Using a file handler will therefore eventually result in losing all new log entries.

If you want your logs to be available on the service's local disk, don't forget to use a rotating file handler. It limits the maximum space your logs can consume, and it takes care of overwriting old log entries to make room for new ones.
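A minimal sketch with Python's standard `logging` module; the log path, size cap, and backup count below are illustrative choices, not prescriptions:

```python
import logging
import logging.handlers
import os
import tempfile

# Hypothetical location and limits: rotate at ~1 MB, keep 3 backups,
# so the logs never consume more than ~4 MB of disk in total.
log_path = os.path.join(tempfile.gettempdir(), "service.log")

logger = logging.getLogger("service.file")
logger.setLevel(logging.INFO)
logger.propagate = False  # keep this example's output in the file only

handler = logging.handlers.RotatingFileHandler(
    log_path,
    maxBytes=1_000_000,  # start a new file once this one reaches ~1 MB
    backupCount=3,       # keep service.log.1..3, then drop the oldest
)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.info("request processed")
```

Once `backupCount` files exist, the oldest backup is discarded on each rollover, so disk usage stays bounded no matter how long the service runs.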

Eeny, meeny, miny, moe


Our production service is usually spread across multiple machines. Searching for a specific log entry requires investigating all of them, and when we're in a hurry to fix our service, there's no time to waste on figuring out where exactly the error occurred.

Instead of saving logs on the local disk, stream them into a centralized logging system. This allows you to search all of them at the same time.

If you』re using AWS or GCP — you can use their logging agent. The agent will take care of streaming the logs into their logging search engine.
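Outside a managed platform, the same idea can be sketched with the standard library's syslog handler; the collector address below is an assumption, to be replaced with your aggregator's host and port:

```python
import logging
import logging.handlers

# A sketch of streaming entries off the local disk, assuming a
# syslog-compatible collector listens on 127.0.0.1:514 (hypothetical).
# UDP delivery is fire-and-forget, so logging won't block the service
# if the collector is briefly unreachable.
logger = logging.getLogger("service.central")
logger.setLevel(logging.INFO)
logger.propagate = False

handler = logging.handlers.SysLogHandler(address=("127.0.0.1", 514))
handler.setFormatter(logging.Formatter("service: %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.info("streamed to the central collector, not the local disk")
```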

To log or not to log? That is the question…


There is a thin line between too few and too many logs. In my opinion, log entries should be meaningful and serve only the purpose of investigating issues in our production environment. When you're about to add a new log entry, think about how you will use it in the future. Try to answer this question: what information does the log message provide the developer who will read it?

Too many times I see logs being used for user analytics. Yes, it is much easier to write "user watermelon2018 has clicked the button" to a log entry than to develop a new events infrastructure. But this is not what logs are meant for (and parsing log entries is not fun either, so extracting insights will take time).

A needle in a haystack

In the following screenshot we see three requests which were processed by our service.

How long did it take to process the second request? Is it 1ms, 4ms or 6ms?

Since we don't have any additional information on each log entry, we cannot be sure which is the correct answer. Having the request id in each log entry could have reduced the number of possible answers to one. Moreover, having metadata inside each log entry can help us filter the logs and focus on the relevant entries.

Let』s add some metadata to our log entry:

Here the metadata is placed in the free-text section of the entry, so each developer can enforce his/her own standards and style. This results in complicated searches.

Our metadata should be defined as part of the entry's fixed structure.

The message in each log entry was pushed aside by our metadata. Since we read from left to right, we should place the message as close as possible to the beginning of the line. In addition, placing the message at the beginning "breaks" the line's structure, which helps us identify the message faster.

Placing the timestamp and log level prior to the message can assist us in understanding the flow of events. The rest of the metadata is mainly used for filtering. At this stage it is no longer necessary and can be placed at the end of the line.
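Putting these pieces together, here is a sketch of a fixed-structure formatter in Python's `logging` module; the `request_id` field name and its value are hypothetical:

```python
import logging

# Fixed structure: timestamp and level first, then the free-text
# message, with filtering metadata (a hypothetical request_id) pushed
# to the end of the line.
formatter = logging.Formatter(
    "%(asctime)s %(levelname)s %(message)s request_id=%(request_id)s"
)

handler = logging.StreamHandler()
handler.setFormatter(formatter)

logger = logging.getLogger("service.requests")
logger.setLevel(logging.INFO)
logger.propagate = False
logger.addHandler(handler)

# `extra` attaches the metadata to the record as a regular attribute,
# so the formatter can place it wherever the fixed structure dictates.
logger.info("request processed in 4ms", extra={"request_id": "a1b2c3"})
```

Note that every log call must now supply `request_id` via `extra`, or formatting fails; in practice a `logging.LoggerAdapter` or a filter can inject it automatically.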

An error that is logged under INFO will be lost among all the normal log entries. Using the entire range of logging levels (ERROR, DEBUG, etc.) can reduce search time significantly. If you want to read more about log levels, you can continue reading here.
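A sketch of using the level range, buffering records in memory only to keep the example self-contained; with distinct levels, a production search can match "ERROR" instead of scanning every INFO line:

```python
import logging
import logging.handlers

logger = logging.getLogger("service.levels")
logger.setLevel(logging.DEBUG)
logger.propagate = False

# A MemoryHandler with no target simply accumulates records; it stands
# in for a real handler so the example can inspect what was logged.
buffer = logging.handlers.MemoryHandler(capacity=100)
logger.addHandler(buffer)

logger.debug("cache miss for key user:42")   # development-time noise
logger.info("request processed in 4ms")      # normal flow
logger.error("payment provider timed out")   # needs attention

errors = [r for r in buffer.buffer if r.levelno >= logging.ERROR]
```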

Logs analysis

Searching files for log entries is a long and frustrating process. It usually requires us to process very large files and sometimes even to use regular expressions.

Nowadays, we can take advantage of fast search engines such as Elasticsearch and index our log entries in them. Using the ELK stack will also give you the ability to analyze your logs and answer questions such as:

  1. Is the error localized to one machine, or does it occur across the entire environment?
  2. When did the error start? What is the error's occurrence rate?
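The first question above can be sketched as an Elasticsearch terms aggregation; the `level` and `host` field names are assumptions about how the entries were indexed:

```python
# A hypothetical query body for Elasticsearch's _search endpoint:
# count ERROR entries per host, so a failure localized to one machine
# shows up as a single dominant bucket.
query = {
    "query": {"term": {"level": "ERROR"}},
    "aggs": {
        "errors_per_host": {"terms": {"field": "host"}}
    },
    "size": 0,  # we only need the aggregation buckets, not the hits
}
```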

Being able to perform aggregations on log entries can provide hints about possible causes of a failure that would not be noticed just by reading a few log entries.

In conclusion, do not take logging for granted. On each new feature you develop, think about your future self and which log entry will help you and which will just distract you.

Remember: your logs will help you solve production issues only if you let them.

