Today's recommended open source project: ChromeAppHeroes (插件英雄榜)
Today's recommended English article: "How Do You Quality"
Today's recommended open source project: ChromeAppHeroes (插件英雄榜). Link: GitHub
Why we recommend it: A "hall of heroes" of good Chrome extensions. As the name suggests, the project collects extensions that work well in Chrome and also explains how to get extensions from the Chrome Web Store. If you use Chrome a lot, it is worth a look to see whether there are handy tools you could add; and if you use Firefox... perhaps now is the time to start scouting good extensions for a future Firefox extension hall of heroes.
Today's recommended English article: "How Do You Quality", by Joe Castro
Original article: https://medium.com/@joecastr/how-do-you-quality-e2518f865e4
Why we recommend it: The author's take on how to measure the quality of your own work.
How Do You Quality
A question I ask in interviews is "How do you verify the quality of your work?" It's really not intended as a loaded question. While there are plenty of bad answers, I don't think there's a singularly correct one. When I've heard others ask the same thing, though, I know they were waiting to check the boxes as they heard "unit tests", "integration tests", "code reviews", etc. It is a question that lends itself to a buzzword bingo card.

I should write a separate post about how and why I conduct interviews the way I do, but overall I'm treating it as a conversation and there's give and take on both sides. What have you done in the past?… What did you find worked well?… What would you do differently now?… When I've done that before, here's what I saw… Does that match your experience, or did you mitigate it somehow?…

Ultimately, anyone who works professionally as a software developer is solving identified business problems, and the go-to tool in their belt is code. And once you separate it from any kind of product design, that code either solves the problem well or it doesn't. The only metric that matters is "Will the business still be around tomorrow?" Crappy quality in code can drag down that metric, but not always in obvious ways.
There's a variation of the original question that I've been asked when I'm on the other side of the interview table: "How do you define good code?" I already alluded to my answer to that, but it's worth listing out the qualities that other professional, employed engineers may ascribe to it.
Good code has:
- No bugs
- Code reviews
- Lots of comments
- Lots of unit tests
- High code coverage from tests
- Verbose logging
- Low cyclomatic complexity
- Preference for interfaces over classes
- A consistent style that follows the team's guidelines
- Source control
- Dependency injection
- Good performance
- Good scalability
- Ease of understanding
- The ability to adapt readily and with high confidence to changing business requirements
Why do developers give those answers to that question? Those are the kinds of things that are measurable and within one's control. Is my code high quality? Of course! I have 10,000 unit tests!

Any of those answers can also be a crutch, because while they're potentially useful, they're not alone sufficient. And they're not free. If those 10,000 tests never catch a bug before a customer sees it, then they weren't actually providing any value. They were costing development time to write and to run; even if that run-time cost is paid by build machines, it still takes that much longer to get the code off the developer's machine and on its way to production. Those tests are also friction for making changes, and they have to be maintained just like any customer-facing code.

Most of the things in that list directly affect a developer's quality of life. They're going to be working in other people's changes, debugging, and trying to shape the code to do new things while still doing the old things, and for the most part the nameless developer in this scenario wants to quickly figure out the right place to do something, get it done, move past it, and not have to revisit it later because it turns out there were some unintended consequences to their work that got discovered in production. That's good enough motivation, and that can be a fine list for those goals. If people are dealing with an inconsistent and unpredictable code base, then it's going to lead to frustration and ultimately attrition (one of the non-obvious ways bad code has an impact). Get enough attrition and it's going to directly affect the business metric.
So with all that as preamble, here are some of my controversial opinions about code quality that are either more nuanced than modern conventional wisdom, or run completely counter to it. They're all formed based on experience rather than ideologies.
Tests have a very specific and limited usefulness. Generally it's better to err on the side of fewer tests
Unit tests have costs: time to write, time to run, time to maintain. Unit tests functionally really only do two things: they exercise code, and when coupled with code coverage they can show how to hit error cases in the code. They don't verify correctness; they make sure the code behaves the way it was written. There will never be unit tests that catch something that wasn't thought of, and code coverage in these tests is super misleading because of the potential explosion of combinatoric states. Add in mocked dependencies that keep track of how many times a function is called and you just turned your implementation details into part of your interface contract.
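To make that last point concrete, here is a minimal sketch (the repository and the report-building function are invented for illustration, not taken from the article) of how a mock's call-count assertion turns an implementation detail into part of the contract:

```python
from unittest.mock import Mock

def build_report(repo):
    """Hypothetical code under test: summarizes orders fetched from a repository."""
    orders = repo.get_orders()
    return {"count": len(orders), "total": sum(o["amount"] for o in orders)}

def test_build_report():
    repo = Mock()
    repo.get_orders.return_value = [{"amount": 5}, {"amount": 7}]

    report = build_report(repo)

    # Asserting on the result checks observable behavior.
    assert report == {"count": 2, "total": 12}

    # Asserting on call counts bakes the implementation into the contract:
    # refactor build_report to batch or cache its repository calls and this
    # line fails even though the behavior the caller sees is identical.
    repo.get_orders.assert_called_once()
```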
Outside of catching egregious errors, unit tests are super valuable for two things:
- They can provide documentation for how an API is intended to be used.
- They're a line of defense for good API design. If it is hard to write the test demonstrating how to use the API, then the API can be improved (a short sketch of both points follows this list).
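A small sketch of those two points, using an invented RateLimiter API rather than anything from the article: the test doubles as usage documentation, and any awkwardness in writing it is early feedback on the API's design.

```python
class RateLimiter:
    """Hypothetical API under test: allows up to `limit` acquisitions per window."""
    def __init__(self, limit):
        self.limit = limit
        self.used = 0

    def try_acquire(self):
        if self.used < self.limit:
            self.used += 1
            return True
        return False

def test_rate_limiter_usage_reads_like_documentation():
    # The test doubles as documentation: construct with a limit,
    # then call try_acquire() once per request.
    limiter = RateLimiter(limit=2)
    assert limiter.try_acquire()       # first request is allowed
    assert limiter.try_acquire()       # second request is allowed
    assert not limiter.try_acquire()   # third request in the window is rejected
    # If this test were painful to write (say it needed five collaborators
    # wired together first), that friction is itself feedback on the API design.
```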
Avoid dependency injection as a pattern. Mocking is an anti-pattern, except in really specific scenarios
DI is a way of making things more testable without thinking about what you're trying to test or why. See above. It leads away from the principles of loose coupling and tight cohesion, even while claiming to promote them. Everything that can be injected becomes a concrete part of the code's contract: the code will be implemented in terms of those services. Libraries that make mocking objects easy reinforce that behavior. Rather than using parameters to drive configurable behavior, it becomes easy to write tests that basically treat implementation details as configuration objects. It becomes very difficult to reason about the true production state of those dependencies and how they're being provided and interacting with each other.
The three scenarios where it makes sense to have mocking are storage, network, and time services. Unless you're working on an actual storage, network, or time library, those are resources that are entirely outside the domain of most code's business logic and in as much as there are tests, mocking them gives the ability to have more reliable and repeatable success and failure verification.
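As a minimal sketch of the time-service case (the session-expiry check is a made-up example, not from the article), injecting the clock is enough to make both the success and failure paths repeatable:

```python
from datetime import datetime, timedelta, timezone

def is_session_expired(started_at, ttl, now_fn=lambda: datetime.now(timezone.utc)):
    """Return True when the session started at `started_at` has outlived `ttl`.

    `now_fn` is the injected time source; production code just uses the real clock.
    """
    return now_fn() - started_at > ttl

def test_expired_session_is_detected():
    fixed_now = datetime(2024, 1, 2, tzinfo=timezone.utc)
    started = fixed_now - timedelta(hours=3)
    # Pinning the clock makes both outcomes repeatable: no sleeping, no flakiness.
    assert is_session_expired(started, ttl=timedelta(hours=1), now_fn=lambda: fixed_now)
    assert not is_session_expired(started, ttl=timedelta(hours=4), now_fn=lambda: fixed_now)
```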
Behavioral metrics are superlative
It is super important to understand how things are running in production. Whereas tests can only tell you answers to questions someone thought to ask, metrics can yield answers that beg new questions. Build dashboards based on important behavioral metrics. Build themed dashboards for different aspects of different features. Operational run books should have links to these dashboards. Whenever someone writes a new feature there should be metrics in place, and a sense of what those metrics will look like if things are going right and also if they're going wrong.
Focus on behavioral metrics. Latency and counters are easy things to record and match a developer's mindset in the code. Avoid falling back on those being the core metrics emitted. This is the data-science version of "the tests verify that the code runs the way I wrote it." Emit metrics based on external observations, ideally framed around the way the customer experiences the system. Don't record something just because it was already conveniently expressed as a number.
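A rough sketch of the difference, with a stand-in metrics client rather than any real library: the first metric is the number that was already lying around, the second is framed around what the customer actually experienced.

```python
import time

class Metrics:
    """Stand-in metrics client; a real one would ship datapoints to a telemetry backend."""
    def emit(self, name, value, **dimensions):
        print(f"{name}={value} {dimensions}")

metrics = Metrics()

def handle_search(query, backend):
    start = time.monotonic()
    results = backend.search(query)
    elapsed_ms = (time.monotonic() - start) * 1000

    # Developer-centric: cheap to record because it was already a number,
    # but it says little about what the customer experienced.
    metrics.emit("search.latency_ms", elapsed_ms)

    # Behavioral: did the customer actually get something usable back?
    metrics.emit("search.zero_results", int(len(results) == 0), query_length=len(query))
    return results
```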
Instrument competing indicators. In as much as metrics can be used to measure the success of a feature, they can also be gamed. This won't usually be malicious, but it's the nature of measuring anything. If you're trying to reduce the latency on an API that returns a list of the top 20 items for X, a cheap way to reduce it is to change that 20 to 10. Is that the right or wrong thing to do? I don't know. It shouldn't be done arbitrarily, and there really had better be something else being measured that indicates the holistic impact of any trade-off being made.
Tie alarms and deployment to metrics monitoring. If a code change causes metrics to move in an unexpected way the deployment should be automatically stopped. A key part of this is that it is automated. Don't expect to be watching every dashboard with thorough diligence for every deployment. There should be so many metrics and so many alarms for any application of reasonable scale that it's infeasible to manually watch them all.
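A hedged sketch of what that automation might look like, with entirely hypothetical metrics and deployment APIs; the point is only that the gate runs in the pipeline instead of relying on someone watching a dashboard.

```python
def deployment_gate(metrics_client, deployment, baseline_failure_rate, threshold=1.5):
    """Stop a rollout automatically when a key behavioral metric drifts from baseline.

    `metrics_client` and `deployment` are hypothetical stand-ins for whatever
    telemetry and deployment tooling is actually in use.
    """
    current = metrics_client.get_rate("checkout.failures", window_minutes=10)
    if current > baseline_failure_rate * threshold:
        deployment.halt()       # stop pushing the change to more hosts
        deployment.roll_back()  # revert the hosts that already took it
        return False            # the alarm, not a human on a dashboard, made the call
    return True                 # metrics look normal; let the rollout continue
```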
This mostly applies to server software, but more and more everything is effectively moving this way. This is equally applicable to client-side telemetry.
Now, tying this back into the original thesis: Do these suggestions help improve the macro business metric?
Maybe. In my experience, I think so. They can certainly be abused by following the letter and not the spirit. I'd hate for someone to read this and use it as an argument for not writing tests and then not doing anything else to replace those tests they didn't write. There are additional things I have opinions on about actual code-level constructs and architecture, but since those aren't really about keeping the business alive, they're not pertinent to this write-up.
These opinions are my own and I reserve the right to get smarter, later. Let me know if any of this rings true or especially if I seem to have completely missed the mark on something.
Download the Open Source Daily (開源日報) app: https://openingsource.org/2579/
Join us: https://openingsource.org/about/join/
Follow us: https://openingsource.org/about/love/