<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[:wq please]]></title><description><![CDATA[Don't forget to add "please", otherwise it might not work. I write about the technical part of my life as a full-time developer, sharing my thoughts and experience.]]></description><link>https://wqplease.com</link><image><url>https://substackcdn.com/image/fetch/$s_!ajGy!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c4a8815-07c0-41a4-81c2-82a5261445b6_720x720.png</url><title>:wq please</title><link>https://wqplease.com</link></image><generator>Substack</generator><lastBuildDate>Tue, 05 May 2026 11:30:53 GMT</lastBuildDate><atom:link href="https://wqplease.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Paweł Zelmański]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[wqplease@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[wqplease@substack.com]]></itunes:email><itunes:name><![CDATA[Paweł Zelmański]]></itunes:name></itunes:owner><itunes:author><![CDATA[Paweł Zelmański]]></itunes:author><googleplay:owner><![CDATA[wqplease@substack.com]]></googleplay:owner><googleplay:email><![CDATA[wqplease@substack.com]]></googleplay:email><googleplay:author><![CDATA[Paweł Zelmański]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Let's talk about code reviews]]></title><description><![CDATA[In this in-depth article, I'm sharing my knowledge and thoughts about the code review process, no matter at which stage of the code review journey you currently are.]]></description><link>https://wqplease.com/p/lets-talk-about-code-reviews</link><guid isPermaLink="false">https://wqplease.com/p/lets-talk-about-code-reviews</guid><dc:creator><![CDATA[Paweł Zelmański]]></dc:creator><pubDate>Sun, 08 Sep 2024 16:22:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!dLKL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77809ab0-1b62-4567-8ba0-7092aa88738f_2048x2048.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Do you review your code? Are you forced to do it? Do you feel like it's working well? Or maybe it feels like like you&#8217;re wasting time?</p><p>If you are already doing code reviews, you might not think a lot about it. But what if it's a missed opportunity? What if you could get even more out of your Pull Requests?</p><p><em>Note: In this article, I might use pull request (PR) and code review interchangeably. To be precise, a pull request is a thing you create in some tool, containing some description, diff of changes and can be approved/declined, and code review is the process of reading the code. 
For the sake of this article, I don't feel like these two need to be differentiated.</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!dLKL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77809ab0-1b62-4567-8ba0-7092aa88738f_2048x2048.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!dLKL!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77809ab0-1b62-4567-8ba0-7092aa88738f_2048x2048.png 424w, https://substackcdn.com/image/fetch/$s_!dLKL!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77809ab0-1b62-4567-8ba0-7092aa88738f_2048x2048.png 848w, https://substackcdn.com/image/fetch/$s_!dLKL!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77809ab0-1b62-4567-8ba0-7092aa88738f_2048x2048.png 1272w, https://substackcdn.com/image/fetch/$s_!dLKL!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77809ab0-1b62-4567-8ba0-7092aa88738f_2048x2048.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!dLKL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77809ab0-1b62-4567-8ba0-7092aa88738f_2048x2048.png" width="1456" height="1456" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/77809ab0-1b62-4567-8ba0-7092aa88738f_2048x2048.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1456,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:5862237,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!dLKL!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77809ab0-1b62-4567-8ba0-7092aa88738f_2048x2048.png 424w, https://substackcdn.com/image/fetch/$s_!dLKL!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77809ab0-1b62-4567-8ba0-7092aa88738f_2048x2048.png 848w, https://substackcdn.com/image/fetch/$s_!dLKL!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77809ab0-1b62-4567-8ba0-7092aa88738f_2048x2048.png 1272w, https://substackcdn.com/image/fetch/$s_!dLKL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77809ab0-1b62-4567-8ba0-7092aa88738f_2048x2048.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><h2><strong>Code review goals</strong></h2><p>Let's start by answering "Why should we even do code reviews on our team?". I think it's not that straightforward to answer, there's no single answer to this question. As usual, it depends and varies from team to team, but here are some things I find beneficial from reviewing the code:</p><h4><strong>Knowledge sharing</strong></h4><p>This is the biggest thing in my opinion. The ability to easily transfer knowledge between team members. And it does not even matter whether they are seniors, juniors, or any mix of these. Knowledge sharing is always happening and is always beneficial. In my case, it happened multiple times that some innocent and small comment on PR started a bigger discussion, sometimes even one that I could call a philosophical one, like the&nbsp;<code>int vs var/let</code>. And I think it's a good thing, since such discussions, if are moderated properly (so we're exchanging our opinions and giving arguments, instead of trying to push as hard as possible onto our favourite method), can lead to the growth of all members within the team. Code reviews are giving opportunities to such discussions, they act as conversation starters. After all, it's not a regular practice (at least not in my case) to approach my coworkers and ask them random questions, like "Hey, how do you usually decide that the function is already too long?" or "Have you learned something new recently you could share with me?".</p><h4><strong>Quality improvement</strong></h4><p>This is another big benefit of code reviews. You always have at least a second pair of eyes, at minimum skimming through the code and checking if you made some of the most obvious mistakes. Depending on the type of change and reviewer, you might get a more detailed review or just a quick look at the code. While I prefer more detailed reviews, they also take more time and energy, so I can understand it's not always possible.</p><p>When I was at the university, I had classes on waterfall methodology. In waterfall, you have multiple separate stages (requirements / design / implementation / ..) and identifying a bug in each next stage increases the cost of fixing a problem by 10x. I'm not sure how true it is, but I believe that something similar happens on pull requests - when you catch a bug in PR instead of in a test environment, you save yourself some time. 
If we compare it to catching a bug in a production environment, the cost is usually even higher. It does not matter whether we're talking about business logic bugs or technical ones (like memory leaks), it's always better not to introduce them.</p><h4><strong>Tests</strong></h4><p>Depending on your policy on testing, a PR is a good point to ask "Is it possible to add some tests?". I also sometimes sit down and try to understand what the tests are testing and whether there are some problems in the tests themselves, or in test coverage.</p><h4><strong>Keeping up with changes</strong></h4><p>If the project is big enough, not everyone can know what's happening in every module. Code reviews are a cheap way of keeping either everyone or some subset of developers up to date with the codebase, without investing too much time into it. They don't have to do the implementation; all they need to do is spend a significantly smaller amount of time, from time to time, checking out the PRs.</p><h2><strong>What should not be a part of a code review</strong></h2><p>There are also some things that PRs and code reviews are not about. These are things like:</p><ul><li><p>Code style, formatting, newlines, etc.</p></li><li><p>Build checks (the reviewer making sure that the app builds)</p></li><li><p>Test checks (the reviewer running the tests on their machine)</p></li></ul><p>These things can be easily automated. Once the code does not compile or the tests are not passing, it should be impossible to merge the code into any sort of main branch, and there's no need to waste the reviewer's time checking this.</p><p>The same goes for formatting - as a team, you should agree on a set of rules, set up an automatic linter and either enforce linting or fail the build on a failed linter check.</p><h2><strong>Code review is about code, not about the author's ego</strong></h2><p>As a code review author, remember: it's not about you. It's about the code. What I mean by that is that when someone points out that you've introduced a bug, there's a problem with your architecture, or the code is hard to understand, try not to take it personally. There's an API rule which I try to follow - Assume Positive Intent. It simply means that you should assume the person on the other side has positive intent. This is especially true in the remote world - it's hard to convey emotions through review comments. Whenever there's room for uncertainty about whether the person has good intent or is just being malicious, always assume good intent. It's a really small thing, but it can massively improve the atmosphere at work. From my experience, in almost all cases people have good intentions at work, and even when they're wrong or are defending something obviously incorrect, it's possible to calmly talk it through. And most importantly - it might always be you who's wrong. This way you avoid being a jerk.</p><h2><strong>Different approaches</strong></h2><p>I've been working in multiple teams, and each team had a very different approach to code reviews. In some teams we had no code review at all, and I think this approach is wrong, no matter what. Code review should be a part of every development team's process.</p><p>I like the 'deep dive' approach to PRs: spending a long time reviewing the code and trying to understand all the "what's and why's" of the reviewed code. But there's also a different approach which I also think is fine.</p><p>A "shallow review". It takes a small amount of time and it's just meant to catch some bigger or more obvious problems. 
The reviewer skims over the code and stops only after something catches their eye. I would recommend this technique for experienced teams where you can trust all the devs to know what they're doing and to take responsibility for their code - without that responsibility, it's a recipe for tragedy.</p><h2><strong>Code review comments</strong></h2><p>Writing good code review comments is an art. What I've found works best is 'ask, don't tell'. So instead of writing 'Add some unit tests', ask: 'Is it possible to add some unit tests?'. This way you open the gates for discussion. Because it's not always that the other person does not want to do it - sometimes there's a good reason why it wasn't done in the first place. I cannot count how many times I've approached the code with the attitude 'Why the hell is it done in such a weird way? Let me fix it real quick', only to realize an hour or three later why, and back off.</p><p>Second thing - be polite. It might be the most important point. If you write a comment 'Why are you using library X instead of Y, are you stupid?', I can almost guarantee you that the other person will either do everything not to do it your way or will back off without any further discussion. This is of course a somewhat exaggerated example, but I've seen cases similarly extreme to this one.</p><p>Let's also try to approach the problem from the other side. What if you've received a comment ending with '...are you stupid?' How can you approach it? I believe there is only one good solution - to talk it through. First I would maybe go to another coworker and ask 'Do you also think that this comment is not appropriate?', to gather a second opinion. When I'm emotional I don't always think straight, so it's good to have someone to ensure you're not overreacting. And then, when I get the confirmation, I'd go and talk with someone. Usually it would be a team leader or manager, but if you're feeling brave, you can go to the person directly. If the person is not just 'mean by definition' but is having a bad day, I can guarantee you that they will appreciate your openness a lot. And if they won't, you don't want to work with them and the problem is way deeper.</p><p>Remember to NEVER respond in anger. If you feel like one of the comments touched you deeply, go for a coffee or a quick walk. Go to the nearest window and look at nature for 3 minutes. Go to another coworker to talk about their kids. Do everything but respond right away. These 3 minutes can save the atmosphere and give you bonus points for your ability to handle difficult situations. Another outcome might be that when you come back with a cookie and coffee, you'll realize that it's not that bad, the reviewer was kind of right and maybe you're the one having a bad day. You won't become a drama queen.</p><p>Another idea for review comments is to mark their size or relevance. One of the techniques is the 't-shirt size' thingy. At the very beginning of each comment, you put a t-shirt size. Let it be:</p><ul><li><p>[s]: you can freely ignore this comment</p></li><li><p>[m]: it's something you should either fix or we should discuss</p></li><li><p>[l]: I'm convinced that there's a bug here</p></li></ul><p>or a lighter version of it, just marking 'irrelevant' comments as nitpicks with 'nit'. 
The difference between a regular comment and a nit comment is that a nit comment usually suggests a change so small that it does not really matter whether it gets applied or not. It might also be something that is a personal preference which you know you should not enforce. It can also be a 'fyi' comment - when something is implemented in a correct way, but there's a new feature of the language that allows you to simplify it.</p><h2><strong>Have a call besides comments</strong></h2><p>When you have a non-trivial review, it sometimes means that you've received or created multiple comments. Some might be big and important, and others you might be unsure of.</p><p>How you could approach it as a reviewer is the following: first of all, sit down and try to understand the code and changes by yourself. Go file by file and review it the best you can. If you don't understand something, write a comment saying that you don't understand it or that you're unsure of something. After you've done the whole review, grab the author on a call. From my experience, they tend to be long calls, since you're probably discussing potential problems and solutions. On the call you can walk comment by comment and clear all unknowns, ultimately saving time, and also bringing both of you onto the same page. Sometimes people are willing to just 'do what the reviewer wrote without too much thinking' instead of trying to understand. This way you can also explain the 'why's' behind the comments and your thinking.</p><p>The same goes when you're the author of the PR. If you feel like you don't fully understand what the reviewer meant by a lengthy and complex comment (or maybe a small and simple one?), just ask them to hop on a call. In the long term it tends to save a lot of time.</p><h2><strong>What should I look for in a code review</strong></h2><p>It also varies a lot from team to team. But usually, some core things remain the same among different teams. These are:</p><ul><li><p>Is the business logic implemented according to requirements?</p></li><li><p>Are there any technical problems with the implementation? Could it be improved / optimized / rewritten according to team standards?</p></li><li><p>Typos</p></li><li><p>Tests - are there any tests? Is the coverage enough? Are the tests on the correct level? Are there any cases not covered by tests?</p></li><li><p>Documentation - are the changes documented properly?</p></li><li><p>Overall compliance with team agreements</p></li></ul><p>If you have agreed to have some steps as part of code review (for example, you should manually test each PR before merging), you could try to make a template which will be pasted - automatically or manually - into the PR description, so that you can publicly tick off the points you've already done and every reviewer can check it without explicitly asking.</p><h2><strong>Review in browser vs local IDE</strong></h2><p>Should you review the code in the browser or should you use your local IDE? I've used multiple online platforms: GitHub, Bitbucket, Upsource, Azure DevOps. They all differ a bit but have one thing in common - they are not IDEs. None of them has the capabilities of your local dev environment. You cannot run tests (okay, you have the CI pipeline, but you get my point). You can sometimes go to the definition of a function or variable, but not always. You don't have the full ability to mouse over something to check its type and usages. 
It's closer to Notepad than to an IDE.</p><p>But on the other hand, they are all always one click away. You don't have to check out the branch locally. You don't have to stash your changes. You can click some URL in a browser and that's it, you're ready to review. Also, they sometimes have some nice features, like marking which commit / file / batch of changes you've already reviewed and saving it for later.</p><p>The decision to do a review online or locally might seem minor and unimportant, but I think it's significant. Maybe not crucial, but it might affect the way you write code. For example, let/var vs type name - if you are in your local IDE, you can look it up by hovering the mouse over it. In the browser, it's usually not available. In such a case, you should write full type names for readability.</p><p>Another thing is the context. When I review the code, in bigger PRs I like to check out the code locally and open it in the IDE. Then I start reviewing the code in some file and jump around between functions, trying to understand the flow. So instead of going top-to-bottom by filename or folder name, I am guided by the flow of logic. This way I'm reviewing not only the code as is, but I'm also able to build a bigger picture of the changes and the business logic inside. I also have easy access to the rest of the codebase, so I can quickly look up code that is not part of the reviewed changes.</p><p>So, which one is better? I don't know. I think that it depends on the team you're in. I am comfortable with doing both.</p><h2><strong>Code review size</strong></h2><p>How big should my code review be? It also depends. If you have to wait a week for the review, I'd say 'as big as possible'. If you can get your code reviewed right away, I think that 'as small as possible' might be the answer. But I don't have a definite answer here. Smaller PRs encourage reviewers to do the review faster. I am also aware that unfortunately there are a lot of people who will become over-nitpicky with small PRs, and when they get a big PR, they will skim over it, approving the changes without actually reviewing.</p><p>Another factor is the completeness of the feature you're implementing. Should you split a PR into (code) and (tests) PRs? Probably not. But depending on the exact details of the feature you're implementing, you could try splitting it into 'vertical slices' - implement a part of the feature that is an isolated and complete piece of code, make a PR, merge, and start working on another small piece. You need to try out different sizes and see what works best.</p><h2><strong>When should I review PRs</strong></h2><p>No later than half a day after they show up, no matter at which point of the day that is. Code review is often a blocking process, meaning that if you don't unblock your peers regularly, they get to the point where they have 7 PRs open and they are getting lost in them, not to mention reviewing them all. If you don't like to be interrupted during your day, you have at minimum two points when you're already 'not in the zone'. The first one is when you've just started working in the morning. The second one is after your lunch. Reviewing code is working towards the whole team's goal, so you shouldn't put off a review longer than necessary. I try to do the reviews as often as possible. If I'm busy with some critical tasks and someone asks me for a review, then I ask them if the review can wait. It should not happen too often, but in reality it does sometimes. Then they at least know that they'll have to wait. 
They can find someone else to review their code, wait for me to finish off what I'm doing currently, or we can discuss re-prioritization. I'm talking about bigger reviews (let's say 30+ files, 100 lines changed in each), requiring even up to a couple of hours to review. Tiny PRs have a smaller risk of having this problem.</p><h2><strong>When should I create a PR</strong></h2><p>Again, this depends on multiple factors. The first one might be CI pipelines and tests. Are there any slow-running tests that can be run only in the pipeline? Or ones that are too slow to run locally? If the answer is yes, then it might be beneficial to create a PR right away, if you have a pipeline tied to PR creation.</p><p>Another thing to consider is catching big problems early. The earlier in the implementation you find that the architecture decision you've made is bad, the less time you've wasted working on code that will get deleted anyway.</p><p>If none of the above applies, I'd say that you should probably create a PR once the feature is fully implemented, or at least 90% implemented (so that you have a chance to implement the missing tests while someone reviews the code, or improve insignificant things alongside the changes coming from PR comments).</p><h2><strong>Pair programming - can we skip the review?</strong></h2><p>What if you pair-programmed a feature? Can it be treated as already reviewed?</p><p>The answer I give to this question is "probably yes". It of course depends on how cautious you are while coding. In some cases, a quick self-review would be beneficial.</p><h2><strong>Self-review</strong></h2><p>I like self-reviewing the code before I hand it over to my teammates. It has two main benefits:</p><ol><li><p>You gain credibility in your team - once you create a PR, there's a high level of trust that your code works. It has advantages in situations like yearly reviews, where you can bring it up.</p></li><li><p>You can catch a lot of things by yourself, without even bringing another person to the table. You save everyone's time, and once reviewers see that you're not wasting their time by submitting half-baked PRs, they at least should appreciate it and make the review faster.</p></li></ol><p>So, how to do it? In the same way as you would do any other review. Grab your favourite tool, create a draft PR (or some equivalent of that) and go file by file, looking for potential problems, leftover code and typos.</p><p>I don't think that self-review can fully replace a second pair of eyes looking at your code, but it's a good habit.</p><p>Self-review also has another benefit. You don't need another person to do it. What it means is that it's a perfect tool for some 'emergency' situations when you don't have a reviewer. This technique saved me some time and stress when I was doing quick hotfixes outside of regular business hours, when no one else was working. To do a self-review, create a PR as you would normally do, and go for a 5-minute break. This way you are leaving 'the zone', and when you get back to your computer to make the review, you are as close to being an independent reviewer as possible.</p><h2><strong>Should the author of a PR fix code outside of their changes?</strong></h2><p>It often happens that there's a piece of code that needs to be refactored but is not a part of the current changes. These are either some TODOs, code containing bugs, or the whole module written in the 'old way'. 
It happens that you receive a comment asking &#8216;Hey, could you fix this TODO while you&#8217;re here?&#8217; Should you touch it or not?</p><p>As usual, it depends. If it's a small and low-risk change, do it. Maybe not all of them if there are multiple, but just a few. Try to make the surroundings just a little bit nicer. If you don't do it, the code will slowly rot and turn into unmaintainable spaghetti over the years.</p><p>Sometimes the changes are too big to address them right away. What you could do is create a ticket or write down on your list 'refactors to be done while I'm bored'. And then when you have some slack time between two tasks just start grabbing things from that list.</p><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://wqplease.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading :wq please! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Let's talk about code editors]]></title><description><![CDATA[Visual Studio, Rider, VSCode, neovim and my dilemmas around them]]></description><link>https://wqplease.com/p/lets-talk-about-code-editors</link><guid isPermaLink="false">https://wqplease.com/p/lets-talk-about-code-editors</guid><dc:creator><![CDATA[Paweł Zelmański]]></dc:creator><pubDate>Tue, 27 Aug 2024 07:00:56 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!JSP3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb57d869-fc23-4531-9be5-04e791927321_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Recently I've started a new job. So far I've been mostly writing C# for the past 10 years. Ten years ago if you wanted to write C#, you were using Visual Studio. It was the only thing that was used to write C# code (at least the only one I knew back then). There was not much space to think about any other editor. Or maybe it was only me being a bit too young to experiment with different editors. I'm not sure about that. But back then the .NET ecosystem was still young and what's more important, it was closed-source. The only choice about editor we had back then was whether we want to use ReSharper within Visual Studio or not. It was, and still is, a tool from JetBrains expanding capabilities of Visual Studio. VS always felt for me like it was a generic tool, without really good integration with .NET ecosystem. That's where JetBrains came in with their solution. They took what Microsoft did, adding a nice features, like partial fuzzy search on filenames. So when I was looking for a file named&nbsp;<code>CustomerDataHandler</code>, I could type&nbsp;<code>cgh</code>&nbsp;or&nbsp;<code>CustDaHa</code>&nbsp;or something similar and ReSharper was able to find that file. It also had way more code hints than plain VS. 
If you wrote some foreach function doing filtering of a list, it suggested that you maybe want to shorten it to a simple&nbsp;<code>.Where()</code>&nbsp;one-liner. But there were also downsides of ReSharper - it was slowing down the whole VS significantely. It was the price you had to pay for all this nice features. And some developers decided that it's not worth it. I was not one of them - I totally fell in love with ReSharper.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!JSP3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb57d869-fc23-4531-9be5-04e791927321_1024x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!JSP3!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb57d869-fc23-4531-9be5-04e791927321_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!JSP3!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb57d869-fc23-4531-9be5-04e791927321_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!JSP3!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb57d869-fc23-4531-9be5-04e791927321_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!JSP3!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb57d869-fc23-4531-9be5-04e791927321_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!JSP3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb57d869-fc23-4531-9be5-04e791927321_1024x1024.png" width="1024" height="1024" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cb57d869-fc23-4531-9be5-04e791927321_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1415193,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!JSP3!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb57d869-fc23-4531-9be5-04e791927321_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!JSP3!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb57d869-fc23-4531-9be5-04e791927321_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!JSP3!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb57d869-fc23-4531-9be5-04e791927321_1024x1024.png 1272w, 
https://substackcdn.com/image/fetch/$s_!JSP3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb57d869-fc23-4531-9be5-04e791927321_1024x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Let's fast-forward to today. Today I'm writing F# code in Rider, a .NET IDE from JetBrains. But since the project I'm working on is quite big and F# is still a niche, the whole experience is far from ideal. It's better than 5 years ago, but still, it's quite often that the code highlighting just stops working or function references do not load.</p><p>Going back from the nice and cosy C# environment, where the whole IDE guided me through almost everything, into the dark pits of F# development, where you're kind of left by yourself, is sometimes hard. There are times when I feel like I'm back writing C at the university many years ago - instead of having a nice 'it seems like you forgot to close the parenthesis somewhere around this line' I get the 'AAAAAAAAAAAAAAA THIS CODE IS ALL WRONG, THERE IS SOME ERROR SOMEWHERE IN THIS FILE BUT I DONT KNOW WHERE SO I WILL UNDERLINE EVERYTHING IN RED', which is not too helpful. But this is also an opportunity. An opportunity to step back and think about my needs and wants when it comes to my code editor.</p><h2><strong>What I'm using today</strong></h2><p>The project consists of a backend in F# and a frontend in React. Currently I'm using at maximum 60% of Rider's capabilities (the remaining 40% is not working because it's F#, not C#) for the backend projects, and nvim for the frontend project. I'm not fully happy about this setup, since it's not convenient to switch back and forth between these two editors. I should call Rider good enough, but after I've experienced the unlimited possibilities of customizing my experience with nvim, I don't enjoy the plainness of Rider. I have a vim motions plugin installed, but it's only part of&nbsp;<code>the neovim experience&#8482;</code>. On the other hand, the number of hours I had to put into my nvim config might be discouraging, especially at work. In my free time it's fine for me to spend 4 hours tinkering with my configs. 
At work it's not feasible to do it on a regular basis. I just need something that works out of the box, or at least something as close to this as possible.</p><h2><strong>Why I don't use nvim for F#</strong></h2><p>There are basically two reasons why I still stick to Rider as my main IDE: debugging and tests. I already can, and sometimes do, run all the projects from the console anyway. I have configured nvim so that I have a file tree, fuzzy find and jumping around the code using LSP, so I can go to a definition or list all usages.</p><p>It is technically possible to run a debugger inside nvim, but I still have something I would call a mental barrier - in Rider it just works perfectly. I would have to spend probably tens of hours trying to set up debugging and learning it, with no guarantee that I won't decide that it makes no sense and return to Rider. Especially since I've already looked into it, and it seems like there are a lot of people using nvim regularly who dropped the idea of working in the .NET environment in nvim because of Rider being too good out of the box.</p><p>The second problem I imagine I would have is running tests. I mean the ability to run a single test from under the cursor. It should be possible to set it up in a way that I could just run the test from under the cursor, or in the worst case I'd just copy the name of the test I'd like to run and pass it as a parameter to a console command, but hitting (CMD + TR), which just works at all times out of the box, is too convenient.</p><h2><strong>Visual Studio Code</strong></h2><p>There is a middle ground between a bulky, performance-heavy editor (Rider) and a configuration-heavy editor (neovim), and it's VS Code. At some point in my career I was amazed by how well vscode actually works. You want to run a python script? Just open the file, click 'yes, I want to install everything' on a popup asking if you want to run this file, and then just hit F5. At least this is my experience with vscode. It just works. But the problem I have with vscode is that it's another editor with its unique properties and shortcuts that I'd have to learn.</p><p>I was already very comfortable using mostly Rider for my daily work, with some support from vscode for some more generic and non-C# tasks. Then I discovered neovim and decided to spend a lot of time not only configuring it, but also learning how to use it properly. To learn unique features, find plugins, set up shortcuts and then learn them all. And most importantly, to learn vim motions. And at the moment I'm more than happy with being fluent in nvim, but the problem with having this knowledge and these skills is that I don't want to go back to the comfy, working-out-of-the-box editors where I can change only so much and the editor's plugin list feels short.</p><h2><strong>I'm unhappy about my current editors</strong></h2><p>When I use neovim, I miss the approachability of Rider or vscode. When I use Rider, I miss the speed and lightness of nvim or vscode. When I use vscode, it all feels meh. It's a thing made of compromises. It's okay at everything but does not excel in any of my needs. Or maybe it's only my image of vscode which is untrue? But I don't feel like trying to change my mind here. I don't like vscode. Maybe it's the overwhelming amount of things that are happening in the background? 
I'm not sure; right now I am unable to name it, I just don't feel like using vscode more than I have to.</p><p>I think that the reason for this whole dilemma is the fact that at some point I decided to pick up nvim. If I hadn't done that, I would be 'stuck' with Rider, not knowing that there is more than that. But now I know, and this leads to me being lost. Maybe one day I will sit down and start tinkering with neovim and try to set it up in a way that mimics Rider's nicest features, at least to some extent. So that it will be good enough to replace it. Way better at some things, and just okay at others.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://wqplease.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading :wq please! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Managing dotfiles with chezmoi]]></title><description><![CDATA[Don't we all need our dotfiles in git?]]></description><link>https://wqplease.com/p/managing-dotfiles-with-chezmoi</link><guid isPermaLink="false">https://wqplease.com/p/managing-dotfiles-with-chezmoi</guid><dc:creator><![CDATA[Paweł Zelmański]]></dc:creator><pubDate>Wed, 31 Jul 2024 12:55:44 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!W7cW!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fecc0705c-fc15-48c2-82bb-26f431eb57d7_1478x662.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I had a problem. Not a big deal, but slightly annoying. Each time I changed or formatted one of my laptops (which happened a couple of times recently) I had to re-create my dotfiles from scratch. I would not call it a big deal, since I don't have large configs, but it still took some time to set everything up.</p><p>Zsh, plugins, gitconfig and neovim. For neovim I already have a separate repo to store my config, but it was still one more thing to remember.</p><p>I decided to do some research (again) and this time I managed to come up with a solution. There are multiple ways to approach this problem. 
Some of them require additional applications, and also you could use bare git repository.</p><p>I've went through a couple of solutions and found out that&nbsp;<strong>chezmoi</strong>&nbsp;works best for me.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!W7cW!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fecc0705c-fc15-48c2-82bb-26f431eb57d7_1478x662.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!W7cW!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fecc0705c-fc15-48c2-82bb-26f431eb57d7_1478x662.png 424w, https://substackcdn.com/image/fetch/$s_!W7cW!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fecc0705c-fc15-48c2-82bb-26f431eb57d7_1478x662.png 848w, https://substackcdn.com/image/fetch/$s_!W7cW!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fecc0705c-fc15-48c2-82bb-26f431eb57d7_1478x662.png 1272w, https://substackcdn.com/image/fetch/$s_!W7cW!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fecc0705c-fc15-48c2-82bb-26f431eb57d7_1478x662.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!W7cW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fecc0705c-fc15-48c2-82bb-26f431eb57d7_1478x662.png" width="1456" height="652" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ecc0705c-fc15-48c2-82bb-26f431eb57d7_1478x662.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:652,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:445624,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!W7cW!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fecc0705c-fc15-48c2-82bb-26f431eb57d7_1478x662.png 424w, https://substackcdn.com/image/fetch/$s_!W7cW!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fecc0705c-fc15-48c2-82bb-26f431eb57d7_1478x662.png 848w, https://substackcdn.com/image/fetch/$s_!W7cW!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fecc0705c-fc15-48c2-82bb-26f431eb57d7_1478x662.png 1272w, https://substackcdn.com/image/fetch/$s_!W7cW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fecc0705c-fc15-48c2-82bb-26f431eb57d7_1478x662.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" 
class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2><strong>Managing dotfiles with chezmoi</strong></h2><p>My needs are not big. I just want to commit selected dotfiles, and then be able to replace and edit them. And I want to do it across multiple machines. What chezmoi gives me is simplicity.</p><p>All you need to do is to install chezmoi, and then follow the&nbsp;<a href="https://www.chezmoi.io/quick-start/#concepts">quick start guide</a>, like so:</p><pre><code><code>chezmoi init
chezmoi add ~/.zshrc
chezmoi cd
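# 'chezmoi cd' opens a shell in chezmoi's source directory, so the git commands below run there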
git commit -m 'init'
git remote add origin <your-repo-url> # set the remote to your repo (replace the placeholder with your repo URL)
git branch -M main
git push
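# note: the source directory (~/.local/share/chezmoi by default) is a plain git repo, so any git workflow works here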
</code></code></pre><p>and that's it. Now you have your&nbsp;`<code>.zshrc`</code>&nbsp;backed up in your repo. In order to have this file on another computer, all you need to execute is</p><pre><code><code>chezmoi init https://github.com/$GITHUB_USERNAME/dotfiles.git
chezmoi diff # to see what changes will be applied
chezmoi apply
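# the same can be done in one step with: chezmoi init --apply https://github.com/$GITHUB_USERNAME/dotfiles.git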
</code></code></pre><p>and that's it. You have successfully synced the&nbsp;`<code>.zshrc</code>`&nbsp;file between two computers. And now you can type</p><pre><code><code>chezmoi edit ~/.zshrc
chezmoi diff # to check what's going to change
chezmoi apply
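# 'edit' changes chezmoi's copy of the file; 'apply' writes that copy back to ~/.zshrc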
chezmoi cd
git commit -m 'changes'
git push
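# on the other machine, 'chezmoi update' pulls the repo and applies the changes in one go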
</code></code></pre><p>and in case you edit the file directly (so you forgot to edit it via chezmoi), there is a&nbsp;<code>re-add</code>&nbsp;command, so you just type</p><pre><code><code>chezmoi re-add
chezmoi cd
git commit -m 'changes'
git push
</code></code></pre><p>and all good!</p><p>Another good thing about chezmoi is that it works out of the box with whole directories - I wanted to sync my&nbsp;`<code>~/.oh-my-zsh</code>`&nbsp;and all it required was</p><pre><code><code>chezmoi add ~/.oh-my-zsh
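# adding a directory brings in its contents recursively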
</code></code></pre><p>In case you want to use your own editor (like nvim, for example) in&nbsp;`<code>chezmoi edit</code>`, all you need to do is set the EDITOR environment variable:</p><pre><code><code>export EDITOR="nvim"
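# put this in your shell profile (e.g. ~/.zshrc) so chezmoi edit keeps opening nvim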
</code></code></pre><h2><strong>Managing per-machine config</strong></h2><p>I'm using the same .gitconfig on my personal and on my work laptop. Obviously, I want to have a different email on each. In order to do this, I need to convert the `~/.gitconfig` file in chezmoi from a plain file to a template.</p><pre><code><code>chezmoi add --template ~/.gitconfig
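# the file is now stored in the source directory as a template (dot_gitconfig.tmpl)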
</code></code></pre><p>and change the entry in `.gitconfig` so that it takes the email from local data:</p><pre><code><code>[user]
    email = {{ .email | quote }}
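    # the value of .email comes from the [data] section of chezmoi's config on each machine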
</code></code></pre><p>Now chezmoi will tell me if the email is missing, on&nbsp;`<code>chezmoi diff`</code>&nbsp;or&nbsp;`<code>chezmoi apply`</code>&nbsp;command:</p><pre><code><code>chezmoi: .gitconfig: template: dot_gitconfig.tmpl:7:12: executing "dot_gitconfig.tmpl" at &lt;.email&gt;: map has no entry for key "email"
</code></code></pre><p>All I need to do is go to&nbsp;`<code>~/.config/chezmoi/chezmoi.toml</code>`&nbsp;and add an email entry, like so:</p><pre><code><code>[data]
    email = "work.email@email.com"
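    # on the personal laptop, the same key holds the personal address instead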
</code></code></pre><p>and ofc do exactly same thing on personal computer with personal email. There are ways of further automating things, but I'm not there yet - you can create variables and whole configs per machine name or os type, and even download packages via package manager.</p><p>Chezmoi has everything or even slightly more than I was expecting from it, while keeping it stupid simple. I cannot recommend it enough.</p><p></p><p>You can find my chezmoi repo <a href="https://github.com/pzelmanski/dotfiles">here</a></p>]]></content:encoded></item><item><title><![CDATA[End-to-end ownership of the project]]></title><description><![CDATA[An article on how to be a responsible developer]]></description><link>https://wqplease.com/p/end-to-end-ownership-of-the-project</link><guid isPermaLink="false">https://wqplease.com/p/end-to-end-ownership-of-the-project</guid><dc:creator><![CDATA[Paweł Zelmański]]></dc:creator><pubDate>Thu, 30 May 2024 17:07:59 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/71ee9c32-39a4-4382-8504-032f3ea5f917_1920x1920.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Don&#8217;t have time to read the whole article? Here&#8217;s the summary:</p><ul><li><p>Full ownership: you build it, you own it</p></li><li><p>My users (or metrics) will tell me when things are off</p></li><li><p>Single owner of the project</p></li><li><p>Canary releases</p></li><li><p>Simplicity over everything</p></li><li><p>No time for bullshit meetings</p></li></ul><div><hr></div><p>For the past almost three years I've been working for a company that I could call a seasoned startup. The company was 10-ish years old, but a lot of things were organized in a way similar to how startups are organized. Bus factor of one - if one person disappears, we have a problem. Not exactly a paralyzing problem since there was usually more than one person which more or less knew the domain, but usually a single person was committing to the particular service.</p><p>It also had some other consequences, like having not much support from teammates in daily work. I've been in so-called backend team, which during a couple of years of existing, slowly and steadily transformed into a core team. We did backend. We did frontend. We did data processing. We did everything, we owned every most crucial piece of software. Okay, maybe not everything (yet!), but we owned a big piece of the company's software, and it was just working. I've owned somewhere around 20 different services from various domains, which means that I was the main person responsible for fixing them when something went wrong, sometimes also during weekends or even vacations.</p><p>It took years of slow and consistent work, to prove that we're able to deliver. I want to describe my experience of working in this team, what I've learned, and how we worked.</p><h2><strong>Full ownership: you build it, you own it</strong></h2><p>This is the most important prerequisite to this all. I had to take full ownership of the services I maintain. I was the only active contributor to the services I owned. Why? Because I will not fix things on Saturday evening that someone else broke by their carelessness. Of course, we had a code review process so I was not left alone, but it was my responsibility to test it before deployment, and if something went wrong, it was also my responsibility to fix it. We had no strict SLAs, but usually I fixed any problems within minutes or hours. 
It was possible thanks to good monitoring.</p><h2><strong>My users (or metrics) will tell me when things are off</strong></h2><p>How do you know that your service is not working the way it should work? Usually the answer could be straightforward: you open the application and check if it works or not. In my case it was not that simple. One of the areas I was solely responsible for was crawlers. Crawling of the internet is hard and unpredictable. I feel like I've seen it all when it comes to the ways of handling errors and failures, and some of them were creative. Response with HTTP code 200 and some carefully, hand-crafted error page. Multiple chains of redirects. HTTP 403 instead of 404 on a page which has been removed. Geo blockers of all sorts. Different sorts of anti-bot protections. Even honeypots for crawlers!</p><p>In such a volatile environment it's really hard to know quickly if something is broken or not, especially in distributed services. Crawlers were processing hundreds of messages per second, so the failure rate was also high at all times. Exceptions were thrown all the time, in every second. Some of them were expected and recoverable, and some of them were unknown unknowns - the things we haven't thought of. Known exceptions are fine - we know what is happening, and can easily and safely determine without a huge rush what's up and fix it relatively slowly. With the unexpected ones, it's a different story. I had a lot of alerts on multiple metrics, such as:</p><ul><li><p>too many unexpected exceptions during the last hour</p></li><li><p>no successful crawl globally for the past hour</p></li><li><p>no successful crawl in one country for the past 4/6/12h (depending on how often we were running crawlers)</p></li><li><p>avg. amount of successful crawls over the past 3 days vs the past 7 days</p></li><li><p>queue count exceeded the threshold</p></li><li><p>messages on the queue are older than 4h</p></li><li><p>queue message count exceeding some threshold</p></li><li><p>and many more</p></li></ul><p>Some of the alerts were more important than others. After some time of working on the project, I had a feeling when it's something that needs immediate attention and when it's a problem that can wait for a day or three, or maybe even I expect that it will resolve by itself. Because sometimes it was totally out of my control - what if the website we're trying to crawl is down? Even worse - what if it's down because we're crawling it a bit too hard? Then the only solution to fix this problem is to wait. And since they might not like the fact that we're crawling them, we cannot write an email saying</p><blockquote><p>Hey, we're crawling you and it seems like our crawlers are failing, could you check it and let us know what's up?</p></blockquote><p>But also there were times when I made some changes, deployed them to prod and I was worried that I might have introduced a bug. And it did happen from time to time. But since I had a lot of alerts, usually I knew right away and was able to fix &amp; deploy the change quickly.</p><h2><strong>Single owner of the project</strong></h2><p>I feel like this might be a very underrated thing. It forces people (at least the right people) to shift their thinking. If there is only one contributor to the project, after some time of working you know the whole codebase by heart. You know all the why's and where's (okay, maybe not all). I'll mention crawlers again. I've spent multiple months working almost exclusively on maintaining and improving crawlers. 
It taught me a lot about the domain and the project. I could understand the project deeply. I got to experience the consequences of my code and design decisions. I had no one else to blame for bugs other than me. When I implemented a feature and I did not fully understand the why's behind it, I usually had to re-do the whole thing a month later, since "You know what? In my head it was working differently, could you change it please?". And such reality taught me a lot. I shifted my thinking about features from "what is needed today" to "today they ask for X, they need Y, and it probably will evolve into Z in the future". Let's take a look at the simple example. Our crawlers had pretty advanced configuration options, allowing for example to set retries on some exceptions, and also to specify which error codes we don't want to retry. When the requirement came in that 'website X is returning 404 (not found) instead of 429 (too many requests), could you please make a config so that we can retry 404s?' I transformed it in my head into 'we need a mechanism of specifying HTTP code into action'. Whenever it was possible, I was trying to make features as generic as possible. Instead of implementing "a flag to interpret 404 as 429" I've implemented "When&nbsp;, then&nbsp;". And pretty often I was right - usually weeks or months later another requirement came in: "Hey, do you remember that site that was doing 404 on too many requests? We have a new similar website, but they're returning 403s. Help pls". And if I anticipated the future requirements correctly, I was able to save usually a couple of days of development and testing.</p><h2><strong>Canary releases</strong></h2><p>Crawlers were running in multiple countries. Some countries were more important to work continuously than others. In other words, we could afford some countries to be broken, and others we could not. So for bigger releases, I've used canary releases. By bigger releases I mean changes in code that I knew that were touching multiple areas, so they can break in multiple places. What I did was I released one country, and then waited for a day or three, depending on importance, size, deadlines and my current capabilities. After the period, I contacted all stakeholders, I've checked the logs, and if everything was looking good, I deployed another batch of countries, this time more significant ones. And for the very last batch, I was leaving the most important and the biggest countries. This way I had multiple checkpoints when I was making sure that everything was fine. Also, it made me extend the deployment period over days or even weeks, so the first country that got deployed had already a significant amount of data proving that the code works fine, even before the last batch of canary has been deployed to prod.</p><h2><strong>Simplicity over everything</strong></h2><p>Do you know that feeling when you sit down and try to understand the code, but the number of virtual classes, abstractions, and inheritance makes you jump from file to file and you need to spend hours debugging? I know it very well. Usually the story behind such code in places I've worked in was that there was this 10x developer who was brilliant. He coded the core of the project over the weekend, then over the next 3 months he built more features on top of that, and then he decided it was time to leave the company. And guess who needs to maintain this little monster? 
Not the person who wrote it!</p><p>When I'm alone in the project, I have a lot of room to do things my way. And my way means simple. I like simple code. If there are too many things going on at once, I cannot be productive, since I'm blocked by the complexity. I need to dive deep between smart functions, doing smart things in non-explicit way so that the code is more compressed. But it does not mean that I don't enjoy it when things are one-liners. Let's take regex as an example. Whenever possible, I try to avoid regex. I'd rather write 10 lines of code, doing step-by-step filtering over 1 line of regex doing things all at once (of course there's more to it, like performance, but it's just a simple example). And the only reason is that I know that it will be hard for me to wrap my head around the regex in 6 months. And if it's step by step, then it's easier to not only debug but also read the code.</p><h2><strong>No time for bullshit meetings</strong></h2><p>This requires some kind of bravery. To be able to hop into the meeting, and ask 'Hey folks, do you need me at this meeting? I have some deadlines to meet and I feel like I will not contribute to this discussion'. Of course, it does not mean that I was allowed to not participate in any meeting, it's just that I had a sense that my time was worth something - after all, I was the only person responsible for delivering my project on time. And I had to explain myself why I missed the deadline. And if people were pushing for me to do things that I felt were wasting my time and it was unjustified by the business requirements, I was escalating it above so that the 'business owner' of the project (someone who wanted my feature to be completed) could escalate and negotiate on my time allocation. I was just being clear that doing thing X will delay thing Y, that's it.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://wqplease.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading :wq please! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[FsCheck 3: Property-based testing in C#]]></title><description><![CDATA[I would assume that we can agree on the fact that unit testing is good and helpful. At least I hope that this is the common opinion. But&#8230;]]></description><link>https://wqplease.com/p/fscheck-3-property-based-testing-in-c-3ba4a2a50388</link><guid isPermaLink="false">https://wqplease.com/p/fscheck-3-property-based-testing-in-c-3ba4a2a50388</guid><dc:creator><![CDATA[Paweł Zelmański]]></dc:creator><pubDate>Thu, 18 Apr 2024 05:01:49 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/6d5396b8-cbf7-4e73-b6cb-0c02a81d3904_800x572.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I would assume that we can agree on the fact that unit testing is good and helpful. At least I hope that this is the common opinion. 
But what if I told you that there is a way to level up your testing? FsCheck is a library that allows you to easily create property-based tests in C# (and F#, but I don&#8217;t cover it in this article). In this article, I&#8217;ll show you how to use FsCheck 3 to write property-based tests in C#. We&#8217;ll start with simple generators and move to more complex ones. I got inspired to write this article after I discovered that FsCheck 3 has absolutely no documentation, so it&#8217;s as useful for me as it is for you. Let&#8217;s start with a short intro to property-based testing.</p><p><em>All the code for this post is available on my <a href="https://github.com/pzelmanski/FsCheck3-Examples">GitHub</a>.</em></p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!rY5u!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa70e4ba9-54f8-477b-92ff-8b79ae0a751d_800x572.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!rY5u!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa70e4ba9-54f8-477b-92ff-8b79ae0a751d_800x572.jpeg 424w, https://substackcdn.com/image/fetch/$s_!rY5u!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa70e4ba9-54f8-477b-92ff-8b79ae0a751d_800x572.jpeg 848w, https://substackcdn.com/image/fetch/$s_!rY5u!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa70e4ba9-54f8-477b-92ff-8b79ae0a751d_800x572.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!rY5u!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa70e4ba9-54f8-477b-92ff-8b79ae0a751d_800x572.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!rY5u!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa70e4ba9-54f8-477b-92ff-8b79ae0a751d_800x572.jpeg" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a70e4ba9-54f8-477b-92ff-8b79ae0a751d_800x572.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!rY5u!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa70e4ba9-54f8-477b-92ff-8b79ae0a751d_800x572.jpeg 424w, https://substackcdn.com/image/fetch/$s_!rY5u!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa70e4ba9-54f8-477b-92ff-8b79ae0a751d_800x572.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!rY5u!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa70e4ba9-54f8-477b-92ff-8b79ae0a751d_800x572.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!rY5u!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa70e4ba9-54f8-477b-92ff-8b79ae0a751d_800x572.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><h3>What is property-based testing?</h3><p>Let&#8217;s say that you have a system that allows you to order fruits and tools. Strange combination, I know, but bear with me. An order has some properties that need to be fulfilled to be valid. For example, items ordered need to have count of at least one. Or the total price has to be non-negative. Or the name needs to be one of the predefined values of items present in the system. To unit test order handling, you would have to come up with a bunch of test cases. But what if I told you that all you need is to define a set of rules that the order needs to follow, and you will have auto-generated data, each time being different? This is what property-based testing is about. You create generators that will create data for you, and then in your test, you just say &#8220;Give me a list of orders&#8221;, and generators will create them for you. And it will run (by default) 100 times, each time with different input data. This way you can test more cases with less effort (usually). I have to admit that not always it&#8217;s easier to write property-based tests, as they require a lot of upfront thinking, but they are worth it when you have a complex calculations system on a big set of data with many different combinations and it would be hard to cover all the possible cases with unit tests. Now let&#8217;s get to the actual code.</p><h3>Our first property-based test</h3><pre><code>[Fact]
public void SimpleTest()
{
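    // QuickCheckThrowOnFailure runs the property (100 times by default, as mentioned above)
    // and throws as soon as a counterexample is found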
    Prop.ForAll&lt;int&gt;(x =&gt; x &gt;= 0).QuickCheckThrowOnFailure();
}</code></pre><p>Let&#8217;s try running the test:</p><pre><code>System.Exception
Falsifiable, after 1 test (0 shrinks) (10979290545183390790,6994218916298689381)
Last step was invoked with size of 2 and seed of (11283820414104960607,2916118026088292885):
Original:
-1
with exception:
System.Exception: Expected true, got false.</code></pre><p>This test is failing. Why? Because we use the default generator for integers, which is not limited to non-negative numbers. Let&#8217;s break down what we can see here:</p><ul><li><p><code>Falsifiable, after 1 test</code> - this means that our test failed after only one try</p></li><li><p><code>(0 shrinks)</code> - shrinking is an interesting, if somewhat advanced, topic. It's a process of trying to find the smallest input data that will still make the test fail. For simple types it works pretty well out of the box: for numbers, it tries to find the case closest to zero; for lists, it will try to find the shortest list that still makes the test fail.</p></li><li><p><code>(10979290545183390790,6994218916298689381)</code> - this is the seed that was used to generate the data. This one's super useful if a test fails after 60 runs. Using the seed you can run the test again with the failing data right away, skipping the successful runs.</p></li><li><p><code>Original: -1</code> - this is the input data for which the test failed</p></li></ul><h3>Simple type generator</h3><p>Okay, so we have a failing test, but how do we make it pass? We need to write a generator that will generate only non-negative numbers.</p><pre><code>public static class SimpleTypesGenerators
{
    // this is a generator that will generate only non-negative numbers
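    // FsCheck picks it up by its return type: any test that registers this class
    // (e.g. via the Property attribute's Arbitrary parameter, as in the next snippet)
    // will use it whenever an int needs to be generated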
    public static Arbitrary&lt;int&gt; OverrideIntArb() =&gt;
        ArbMap.Default.GeneratorFor&lt;int&gt;().Where(x =&gt; x &gt;= 0).ToArbitrary();
}</code></pre><pre><code>public class SimpleTypesTests
{
    // Another way of running PBT, also specifying the generator
    [Property(Arbitrary = new[] { typeof(SimpleTypesGenerators) })]
    public Property Generator_OverridingInt(int p)
    {
        return (p &gt;= 0).ToProperty();
    }
}</code></pre><p>Let&#8217;s run the test and see the output:</p><pre><code>Ok, passed 100 tests.</code></pre><p>As you can see, not only did the test pass, but it also ran 100 times. This is the default number of runs for FsCheck.</p><p>Now let&#8217;s try changing the condition in the test to <code>p &lt;= 50</code> and see what happens:</p><pre><code>FsCheck.Xunit.PropertyFailedException
Falsifiable, after 61 tests (0 shrinks) (7331799476358367170,1508067056380619747)
Last step was invoked with size of 61 and seed of (6419148108620199268,14754055774596855233):
Original:
57</code></pre><p>As you can see, the test failed after 61 runs, and the input data failing the test was 57.</p><h3>Generators of custom&nbsp;types</h3><p>Let&#8217;s try to create a generator for a custom record type:</p><pre><code>public record PositiveInt(int Value);

public static class SimpleTypesGenerators
{
    // Generator, used to generate the values
    public static Gen&lt;PositiveInt&gt; GetPositiveInt() =&gt;
        ArbMap.Default.GeneratorFor&lt;int&gt;().Where(x =&gt; x &gt;= 0).Select(x =&gt; new PositiveInt(x));
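
    // Note: a Gen only describes how to produce values; tests consume an Arbitrary,
    // which is why this generator gets wrapped with Arb.From in the next snippet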
}</code></pre><p>This is the generator for the <code>PositiveInt</code> record. To be able to use it in tests, we need to create an <code>Arbitrary</code> from it:</p><pre><code>public static class SimpleTypesGenerators
{
    public static Arbitrary&lt;PositiveInt&gt; PositiveIntArb() =&gt;
        Arb.From(GetPositiveInt());
}</code></pre><p>Now we can use it in our tests:</p><pre><code>[Fact]
public void AnotherWayOfRunningTests()
{
    var prop = Prop.ForAll&lt;PositiveInt&gt;(p =&gt; p.Value &gt;= 0);
    prop.Check(Config.Default);
}</code></pre><p>And the test is passing.</p><h3>More complex&nbsp;types</h3><p>Now let&#8217;s try to create a generator for order. First of all, here&#8217;s the order object:</p><pre><code>public record Order(Guid Id, String Name, OrderStatus Status, int Quantity, decimal Price);

// enum with possible statuses
public enum OrderStatus
{
    New,
    InProgress,
    Done
}

// list of possible names for orders - names from outside of the list are illegal and should not be generated
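// (shown next to the record for brevity; in the full example these two fields presumably live inside the generators class below)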
private static readonly List&lt;string&gt; FruitNames = new List&lt;string&gt; { "Apple", "Banana", "Cherry", "Elderberry" };
private static readonly List&lt;string&gt; ToolNames = new List&lt;string&gt; { "Axe", "Hammer", "Screwdriver", "Wrench" };</code></pre><p>Let&#8217;s try to quickly create a generator for orders:</p><pre><code>public static class ComplexTypesGenerators
{
    // Choose one of the statuses - you can specify only a subset of the statuses you want to generate
    // if needed, you can split it into multiple generators, having different subsets of statuses
    public static Gen&lt;OrderStatus&gt; OrderStatusGenerator()
    {
        return Gen.Elements&lt;OrderStatus&gt;(OrderStatus.New, OrderStatus.InProgress, OrderStatus.Done);
    }

    // Generator returning a name from two lists, with weight attached.
    // This means that 1/4 of the time it will generate a name from the list of tools,
    // and 3/4 it will be the name from the list of fruits
    public static Gen&lt;String&gt; NameGenerator()
    {
        return Gen.Frequency(
            (3, Gen.Elements&lt;String&gt;(FruitNames)),
            (1, Gen.Elements&lt;String&gt;(ToolNames))
        );
    }

    // Generator for price - generates a number between 1 and 100, and then converts it to decimal
    public static Gen&lt;decimal&gt; PriceGenerator()
    {
        return Gen.Choose(1, 100).Select(x =&gt; x * 1.0m);
    }

    // I have found this old Linq syntax to be the most readable.
    // As an alternative, you can also use the syntax from the function below
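    // (each additional 'from' clause compiles down to a SelectMany call on the generator,
    // which is what makes query syntax work for Gen at all)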
    public static Gen&lt;Order&gt; OrderGenerator()
    {
        return 
            from name in NameGenerator()
            from qty in Gen.Choose(1, 10)
            from price in PriceGenerator()
            from status in OrderStatusGenerator()
            select new Order(Guid.NewGuid(), name, status, qty, qty * price);
    }

    // This is an alternative way of writing the same generator as above,
    // but chaining selects gets unreadable quickly
    public static Gen&lt;Order&gt; AnotherWayOfOrderGeneration()
    {
        return NameGenerator().SelectMany(name =&gt;
        {
            return Gen.Choose(1, 10).Select(qty =&gt;
                new Order(Guid.NewGuid(),
                    name,
                    OrderStatusGenerator().Sample(1, 1).Single(),
                    qty,
                    qty * PriceGenerator().Sample(1, 1).Single())
            );
        });
    }

    public static Arbitrary&lt;Order&gt; OrderArb() =&gt;
        Arb.From(OrderGenerator());
}</code></pre><p>And now let&#8217;s write some tests using the above generator:</p><pre><code>public class ComplexTypesTests
{
    private readonly ITestOutputHelper _testOutputHelper;

    public ComplexTypesTests(ITestOutputHelper testOutputHelper)
    {
        _testOutputHelper = testOutputHelper;
    }

    [Fact]
    public void SingleOrderTest()
    {
        var prop = Prop.ForAll&lt;Order&gt;(order =&gt;
        {
            _testOutputHelper.WriteLine(order.ToString());
            Assert.True(order.Price &gt;= 1);
        });
        prop.Check(Config.QuickThrowOnFailure
            // Specify the generator class for the test
            .WithArbitrary(new[] { typeof(ComplexTypesGenerators) }));
    }

    [Fact]
    public void OrdersListTest()
    {
        // Having a generator for a complex type, we have a generation of a list of such types for free.
        Prop.ForAll&lt;List&lt;Order&gt;&gt;(orders =&gt;
            {
                _testOutputHelper.WriteLine(orders.Count.ToString());
                // Some asserts
            })
            .Check(Config.QuickThrowOnFailure
                .WithArbitrary(new[] { typeof(ComplexTypesGenerators) }));
    }
}</code></pre><p>A few first outputs of the tests:</p><p><code>SingleOrderTest</code>, writing the single object to the console:</p><pre><code>Order { Id = ec0b5817-d124-4211-ace9-7a52253a6766, Name = Banana, Status = InProgress, Quantity = 10, Price = 280,0 }
Order { Id = be828369-ff32-4b5a-b6b1-c9dfc17650bb, Name = Cherry, Status = Done, Quantity = 1, Price = 11,0 }
Order { Id = 93d6c498-851d-4118-b727-a0f1f812004c, Name = Cherry, Status = New, Quantity = 9, Price = 81,0 }
Order { Id = 86df039e-3975-4200-b7c4-e68443218584, Name = Banana, Status = Done, Quantity = 1, Price = 54,0 }
Order { Id = b28c0be2-d9fb-4e9c-9b4e-0a0e7ed48c76, Name = Banana, Status = InProgress, Quantity = 10, Price = 60,0 }
Order { Id = de99993a-bc4c-4c51-826e-99e88f30beec, Name = Banana, Status = New, Quantity = 7, Price = 273,0 }
Order { Id = 17d2b4c9-87be-40bb-92f2-e6be6bcca2d5, Name = Axe, Status = New, Quantity = 1, Price = 18,0 }
Order { Id = 1b49b225-fa60-4424-8e93-923b63846d3e, Name = Apple, Status = InProgress, Quantity = 4, Price = 228,0 }
Order { Id = 87a52d4d-9ca8-448c-b643-76751a23c671, Name = Elderberry, Status = InProgress, Quantity = 5, Price = 50,0 }
Order { Id = 0581488e-499c-4cc5-8928-d5ffab089b79, Name = Banana, Status = Done, Quantity = 5, Price = 195,0 }
Order { Id = 923d5fce-3ea8-46d4-8bc9-b634b6f9f296, Name = Axe, Status = New, Quantity = 5, Price = 100,0 }
(...)</code></pre><p><code>OrdersListTest</code>, writing the count of the input list to the console:</p><pre><code>2
3
1
1
5
2
6
0
5
4
11
11
5
9
14
4
19
0
5
4
2
21
19
24
(...)</code></pre><p>As you can see, we have a lot of data generated, and the more data we need to generate, the more useful it becomes.</p><h3>Using generators outside of property&nbsp;tests</h3><p>It is possible to use generators in regular unit tests. It might be useful if you already have a generator and you don&#8217;t want to run it as a PBT (for any reason). However, I feel like there is no reason to do that, because why not just use PBT?</p><pre><code>public class GeneratorInNonPbtTests
{
    [Fact]
    public void NonPbtUnitTest()
    {
        var positiveInt = SimpleTypesGenerators.GetPositiveInt().Sample(numberOfSamples: 1, size: 1).First();
        Assert.True(positiveInt.Value &gt;= 0);
    }
}</code></pre><h3>Properties of&nbsp;tests</h3><p>There are a few configuration options you can use to control the behavior of your tests. I&#8217;ll focus on the two most interesting ones: <code>WithArbitrary</code> and <code>WithReplay</code>:</p><pre><code>prop.Check(Config.QuickThrowOnFailure
    // Specify the generator class for the test
    .WithArbitrary(new[] { typeof(ComplexTypesGenerators) })
    .WithReplay("(6419148108620199268,14754055774596855233)"));</code></pre><p><code>WithArbitrary</code> is used to specify the generator class for the test.</p><p><code>WithReplay</code> is used to replay the test with the same seed that was used to generate the failing data. You can find the seed in the output of the failing test.</p><p>Using an attribute on the test method, it looks similar:</p><pre><code>[Property(Arbitrary = new[] { typeof(SimpleTypesGenerators) }, Replay = "(6419148108620199268,14754055774596855233)")]</code></pre><h3>Summary</h3><p>I am not using property-based testing daily. For simple functions and simple inputs, it&#8217;s usually easier to write unit tests. But when I have complex functions in which I know that I might have a lot of edge cases, that&#8217;s where PBT comes in handy.</p><p>The best part about PBT is that it&#8217;s repeatable randomness. You can always re-run the failed test with the same seed, getting the same input data. However, there are also some downsides. The biggest one I&#8217;ve found is that there is no documentation for FsCheck 3. For me, it was trial and error, and I hope that this article will help you to get started with FsCheck. If you have any trouble, I can recommend looking at the tests of the project itself and playing around with the code&#8202;&#8212;&#8202;I found it helpful.</p><p>Enjoy testing!</p>]]></content:encoded></item><item><title><![CDATA[The ways of working with git]]></title><description><![CDATA[Are you working in a team? Or maybe you are working alone? It does not matter how you work, you should always think about how to improve&#8230;]]></description><link>https://wqplease.com/p/the-ways-of-working-with-git-04260714a932</link><guid isPermaLink="false">https://wqplease.com/p/the-ways-of-working-with-git-04260714a932</guid><dc:creator><![CDATA[Paweł Zelmański]]></dc:creator><pubDate>Sat, 06 Apr 2024 14:19:16 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/a83f36e4-7e2d-4584-a45f-bbffb0f73c7c_800x533.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Are you working in a team? Or maybe you are working alone? It does not matter how you work, you should always think about how to improve your process. In this article, I&#8217;m describing different ways of working with git that worked for me in the past. I am also sharing my current way of working with git, which will probably not fit your team, but it might give you some ideas on how to proceed when trying to improve your process.
And after that, I have two stories from the past for you about how things were done without git.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!P7Ev!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2033f6f-c73a-465c-94e9-e865d47c3246_800x533.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!P7Ev!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2033f6f-c73a-465c-94e9-e865d47c3246_800x533.jpeg 424w, https://substackcdn.com/image/fetch/$s_!P7Ev!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2033f6f-c73a-465c-94e9-e865d47c3246_800x533.jpeg 848w, https://substackcdn.com/image/fetch/$s_!P7Ev!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2033f6f-c73a-465c-94e9-e865d47c3246_800x533.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!P7Ev!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2033f6f-c73a-465c-94e9-e865d47c3246_800x533.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!P7Ev!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2033f6f-c73a-465c-94e9-e865d47c3246_800x533.jpeg" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f2033f6f-c73a-465c-94e9-e865d47c3246_800x533.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!P7Ev!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2033f6f-c73a-465c-94e9-e865d47c3246_800x533.jpeg 424w, https://substackcdn.com/image/fetch/$s_!P7Ev!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2033f6f-c73a-465c-94e9-e865d47c3246_800x533.jpeg 848w, https://substackcdn.com/image/fetch/$s_!P7Ev!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2033f6f-c73a-465c-94e9-e865d47c3246_800x533.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!P7Ev!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2033f6f-c73a-465c-94e9-e865d47c3246_800x533.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><h3>First step: Git and no code&nbsp;reviews</h3><p>This is the first step. You have a Git repository, but you don&#8217;t know how to use it. 
You have a single branch, maybe collaborating with someone, but you&#8217;re not using any of the git features besides pull, push, and commit. This is a good starting point. This is the best way to start, but it&#8217;s nonsustainable long-term.</p><h3>Starting small: develop and main&nbsp;branch</h3><p>The next step is to start using branches. Let&#8217;s say you have some application being deployed manually to the production. And by this, I mean that you have to either copy files to the server or click some button in your IDE. You have no CICD pipeline, no automation, etc., but you&#8217;d like still to be able to make a hotfix to production while working on the new features. The flow then would be the following:</p><ul><li><p>you have a <code>main</code> branch, which is always in the state of the production</p></li><li><p>you have a <code>develop</code> branch, which is the branch where you're working on the new features</p></li><li><p>at every moment you can switch from the <code>develop</code> branch to the <code>main</code> branch, make a hotfix, and then switch back to <code>develop</code> without much disruption. You don't have to make sure that the <code>develop</code> works fine, there is no heavy testing required before deploying a hotfix</p></li></ul><h3>Slow but steady: develop, main, and feature&nbsp;branches</h3><p>This is the approach that works pretty well if you have multiple contributors. The flow is similar to the previous one:</p><ul><li><p>you again have a <code>main</code> branch which is in the state of production</p></li><li><p>you have a <code>develop</code> branch that should be working at all times, but it might happen that sometimes it's broken or untested</p></li><li><p>you have feature branches, which you create to develop new features</p></li><li><p>once you finish the feature, you merge it into the <code>develop</code> branch</p></li><li><p>once you have enough changes, you merge the <code>develop</code> branch into the <code>main</code> branch, do some extensive testing, and then deploy <code>main</code> to the production</p></li></ul><p>This approach is good because you push conflicts between the developers into the merge phase. If multiple people are working on the same branch, there is a high chance that they will have to constantly resolve conflicts. Especially if they&#8217;re working on similar areas of the code. The second advantage is that you can review and test features separately, before merging them into the <code>develop</code> branch. This way you need to spend less time on testing the whole application, and if one developer decides to break the build for a week because of a big change (which I don't think is a good approach, but it happens), the other developers can still work on their features without disruptions.</p><h3>Trunk-based development</h3><p>Trunk-based development is the feature branches approach but with a little twist. You have <code>main</code>, <code>develop</code> and feature branches, but the goal is to merge feature branches as soon as possible into the <code>develop</code> branch, and possibly also into the <code>main</code> branch. This way you avoid the long-living branches, which are hard to merge. Of course, you need to make sure that your changes will not affect the production, so usually feature toggles are used to ensure that the code is unreachable, but it's still in the codebase. 
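A minimal, hypothetical sketch of such a toggle in C# (the flag name and the discount logic are made up just for illustration) could look like this:</p><pre><code>public static class Checkout
{
    // Hypothetical toggle: the new discount logic is already merged into the codebase,
    // but stays unreachable in production until the flag is flipped on
    private static readonly bool UseNewDiscountEngine =
        Environment.GetEnvironmentVariable("USE_NEW_DISCOUNT_ENGINE") == "true";

    public static decimal CalculateTotal(decimal subtotal)
    {
        if (UseNewDiscountEngine)
            return ApplyNewDiscounts(subtotal); // new, still-in-progress code path

        return subtotal; // current production behaviour
    }

    // placeholder for the feature being developed behind the toggle
    private static decimal ApplyNewDiscounts(decimal subtotal)
    {
        return subtotal * 0.9m;
    }
}</code></pre><p>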
This approach is good not only because of fewer conflicts but also because the code reviews are smaller, so there is a higher chance of getting the code reviewed right away, without the need to wait days for the review. Have you ever had a situation where you've waited for a week to get your code reviewed? If yes then your process is broken. This is a deeper topic, but one of the ways to try and fix it is to lower the friction on the code reviews. So instead of working for 3 weeks on a single feature, and having 100 files changed, you split your work into small batches of let's say 20 files which you produce over 3 days. This way you might get your code reviewed faster, as the code reviewer will get less anxiety from seeing how many files have changed.</p><h3>Release branches</h3><p>Release branches are useful in two situations:</p><ul><li><p>When you don&#8217;t have an automated CICD pipeline and you want to track when and which version has been deployed to production, or</p></li><li><p>When you have a versioned product, like a desktop application with multiple major versions (version 1, 2, 3, etc.) with long-term support for each of them</p></li></ul><p>In the first case, I&#8217;m sorry for you. In the second one, I&#8217;m also sorry. These two situations are hard to work with. In the first case, you need to track releases in case of rollback or hotfixes&#8202;&#8212;&#8202;it might be that you have merged some changes into main already but you found a bug both on main and production, and these are two different bugs. Then it&#8217;s easier to just checkout the release branch and fix the bug, without the need to worry about the changes which are already in the main branch. This can also be done with tags. It&#8217;s a bit more complicated (just slightly), but the general principle is the same.</p><p>In the second case imagine a scenario in which you have 3 versions of your application that you support&#8202;&#8212;&#8202;versions 4, 5, and 6. And in version 5 you&#8217;ve found a security vulnerability. You need to fix it in versions 5 and 6. Usually how it&#8217;s done is that you fix the bug in version 5 (start with the lowest version you have) and then depending on the changes and if it&#8217;s possible, you either cherry-pick the fix to subsequent versions, or you manually re-apply the changes. You repeat the process for each version having this bug.</p><h3>My current&nbsp;way</h3><p>I&#8217;m currently working in a company where all developers are working mostly on separate services. We have a lot of services and each service is developed by a single developer. The workflow I am using is the following:</p><ul><li><p>I have the <code>main</code> branch, which is the production</p></li><li><p>I have feature branches that are long-living and are merged into the <code>main</code> branch once they're done</p></li><li><p>Feature branches take a lot of time to develop&#8202;&#8212;&#8202;sometimes it happens that it&#8217;s even a week or two</p></li><li><p>If I have a feature taking more than 2 weeks, I usually split the work into smaller feature branches, trying to deliver some value as soon as possible, slowly expanding the feature</p></li></ul><p>Of course, this approach has its downsides. The biggest one is that code reviews are sometimes massive. It&#8217;s pretty usual for us to have 100 or even 300 files changed in a single code review. But on the other hand, the process is lightweight. 
We need to review these 300 files anyway, so why not batch them together and do one single round of review instead of 10 rounds of 30 files each? What helps a lot is the fact that there is most of the time a single person working on the service, so we don&#8217;t have to worry about conflicts.</p><h3>Summary</h3><p>The process of code reviews is very team- and codebase-dependent. There will be a different process for a team working on a monolith, and possibly a different one for a team working on microservices. What I am looking for while working on the improvements of the process is simplicity. The process should be as lightweight as possible. But sometimes it&#8217;s just not possible to have a simple process. But always try to make it just a bit simpler.</p><h3>Bonus: Stories from the&nbsp;past</h3><h3>Welcome to the 90s: SVN and no code&nbsp;reviews</h3><p>As I&#8217;m typing this, I feel old. However, I don&#8217;t consider myself old. I started my professional career in an old-style company, writing Delphi code stored in SVN. The approach of SVN is different than the one used by Git&#8202;&#8212;&#8202;instead of being distributed, it&#8217;s centralized. This means that two people can&#8217;t edit the same file at the same time. It was enforced by locks&#8202;&#8212;&#8202;everyone could lock the file at the start of the work, and then release the lock after doing the changes. As I think of it right now, I find it pretty funny that I needed to walk to my coworker&#8217;s desk and ask him when he was done with the file so that he&#8217;d release the lock on the file and I could edit it. I remember that when I was opening any file with an active lock on it, I was getting a message that the file was locked by someone else. I was able to see who and when locked it. The worst part was when someone was on vacation. Then we had to do the ancient way of merging changes&#8202;&#8212;&#8202;I could steal the lock, and then the person had to copy their changes to the separate file, download my changed file, and then manually apply their changes from the separate file. No one thought about any form of code review. It was just chaos.</p><h3>Version Control From Hell: Dropbox as a version control&nbsp;system</h3><p>I once had the pleasure of working with a self-taught programmer who did not need to collaborate with anyone. He used Dropbox as his version control system. As he implemented the feature, or after some time, he just zipped the whole project and sent it to the Dropbox account. And to avoid paying for the storage, he had dozens of accounts with free tiers. He had the whole system of managing this whole mess and I&#8217;m impressed that it worked for him&#8202;&#8212;&#8202;he knew what is happening and when he had any problems, he was able to restore the old version. 
But enough of the funny stories from the past, let&#8217;s move on to present times.</p>]]></content:encoded></item><item><title><![CDATA[JSON Unexpected Naming Convention]]></title><description><![CDATA[A short story on how "1" is a valid JSON key]]></description><link>https://wqplease.com/p/json-unexpected-naming-convention</link><guid isPermaLink="false">https://wqplease.com/p/json-unexpected-naming-convention</guid><dc:creator><![CDATA[Paweł Zelmański]]></dc:creator><pubDate>Sat, 23 Mar 2024 13:43:10 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/04b5b7c4-b371-4259-a23c-de24b8f4da50_5760x3540.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Recently I've encountered an interesting problem. My task was to deserialize a JSON file which I at first assumed was invalid. The file looked like this:</p><pre><code><code>{
    "1": {
        "name": "Erik",
        "age": 30
    },
    "2": {
        "name": "Jane",
        "age": 25
    },
    "3": {
        "name": "Alex",
        "age": 35
    }
}
</code></code></pre><p>Since it was not an internal file but an external one over which I had no control, at first I got a bit frustrated and started to look for an opportunity to say "This file is not following the specification, fix it!". I mean, just look at this! Have you seen a JSON file with keys being numbers? How am I supposed to access this data? My first thought was that it was invalid JSON because the keys need to be valid variable names. And I don't know any programming language that allows a variable name to start with a number, not to mention that it's a single-digit number.</p><p>So I went to the Internet browser and started reading the JSON specification. This is what I've found (it comes from the JSON:API spec's rules for member names):</p><pre><code><code>Member Names

Implementation and profile defined member names used in a JSON:API document MUST be treated as case sensitive by clients and servers, and they MUST meet all of the following conditions:

    Member names MUST contain at least one character.
    Member names MUST contain only the allowed characters listed below.
    Member names MUST start and end with a &#8220;globally allowed character&#8221;, as defined below.

To enable an easy mapping of member names to URLs, it is RECOMMENDED that member names use only non-reserved, URL safe characters specified in RFC 3986.
Allowed Characters

The following &#8220;globally allowed characters&#8221; MAY be used anywhere in a member name:

    U+0061 to U+007A, &#8220;a-z&#8221;
    U+0041 to U+005A, &#8220;A-Z&#8221;
    U+0030 to U+0039, &#8220;0-9&#8221;
    U+0080 and above (non-ASCII Unicode characters; not recommended, not URL safe)

Additionally, the following characters are allowed in member names, except as the first or last character:

    U+002D HYPHEN-MINUS, &#8220;-&#8220;
    U+005F LOW LINE, &#8220;_&#8221;
    U+0020 SPACE, &#8220; &#8220; (not recommended, not URL safe)
</code></code></pre><p>This was followed by a long list of illegal characters. But let's slowly break it down, shall we?</p><blockquote><p>Member names MUST contain at least one character Okay, so one character key is allowed. Fine, I've already seen variables named&nbsp;<code>x</code>&nbsp;or&nbsp;<code>y</code>&nbsp;in the code, so I can live with that.</p><p>Member names MUST contain only the allowed characters listed below Member names MUST start and end with a &#8220;globally allowed character&#8221;, as defined below. Easy, still no problem. They are a-z, A-Z, right?</p><p>U+0061 to U+007A, &#8220;a-z&#8221; U+0041 to U+005A, &#8220;A-Z&#8221; U+0030 to U+0039, &#8220;0-9&#8221;</p></blockquote><p>Okay, now I'm confused. But there are characters not allowed in the first and last character, it cannot be that the name can start with a number, right?</p><p>Hell no! It just cannot be a minus, underscore, and space. But it can be a number. I was sitting shocked. How am I supposed to deserialize it into any object?</p><h2><strong>The realisation</strong></h2><p>I was sitting shocked. How am I supposed to deserialize it into any object? I've never seen a JSON file like this before. I started googling it and the solution is simple. It's a dictionary. It's a dictionary with keys being numbers. And it's a valid JSON file. It's a valid JSON file that I can deserialize into a dictionary.</p><p>So basically such JSON can be deserialized into a&nbsp;<code>Dictionary&lt;string, MyObject&gt;</code>&nbsp;where&nbsp;<code>MyObject</code>&nbsp;is a class with&nbsp;<code>name</code>&nbsp;and&nbsp;<code>age</code>&nbsp;in this case. My main mistake in the whole thing is that usually if there is a list of objects, it's an array, not a dictionary, and within this array, I'm used to having objects with keys being strings, not numbers.</p><p>I feel like this might be an obvious thing for front-end developers, but I'm not one of them. I'm a backend developer. Even though I now know the answer, I'm still shocked that "1" and "2" are valid keys in JSON.</p><p></p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://wqplease.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Pawe&#322;&#8217;s Substack! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[My infrastructure journey — part 1]]></title><description><![CDATA[A couple of months ago I decided to give it a try and get into self-hosted. 
The problem is that I&#8217;m not the best in terms of this &#8212; it&#8217;s&#8230;]]></description><link>https://wqplease.com/p/my-infrastructure-journey-part-1-e1d059c8cce1</link><guid isPermaLink="false">https://wqplease.com/p/my-infrastructure-journey-part-1-e1d059c8cce1</guid><dc:creator><![CDATA[Paweł Zelmański]]></dc:creator><pubDate>Sat, 23 Mar 2024 07:39:45 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/e52d5c60-66cc-4b46-b140-f1ab1ec3cd6c_800x533.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Are you bad at infrastructure? I also used to be. A couple of months ago I decided to give it a try and get into self-hosted and oh boy, it&#8217;s been an amazing and rewarding journey! Let me share my story with you.</p><p>It&#8217;s hard for me to think in terms of infrastructure. I&#8217;m bad at bash. I&#8217;m bad at networking. I&#8217;m bad at everything that is needed to make it work. At least this is what I was thinking for multiple years. And right now slowly I&#8217;ve started convincing myself that the infrastructure work is not only not that hard, but also can be pretty enjoyable.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!m-63!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7064778d-4feb-4517-badd-dc22b6a6f8c7_800x533.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!m-63!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7064778d-4feb-4517-badd-dc22b6a6f8c7_800x533.jpeg 424w, https://substackcdn.com/image/fetch/$s_!m-63!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7064778d-4feb-4517-badd-dc22b6a6f8c7_800x533.jpeg 848w, https://substackcdn.com/image/fetch/$s_!m-63!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7064778d-4feb-4517-badd-dc22b6a6f8c7_800x533.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!m-63!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7064778d-4feb-4517-badd-dc22b6a6f8c7_800x533.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!m-63!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7064778d-4feb-4517-badd-dc22b6a6f8c7_800x533.jpeg" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7064778d-4feb-4517-badd-dc22b6a6f8c7_800x533.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!m-63!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7064778d-4feb-4517-badd-dc22b6a6f8c7_800x533.jpeg 424w, https://substackcdn.com/image/fetch/$s_!m-63!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7064778d-4feb-4517-badd-dc22b6a6f8c7_800x533.jpeg 848w, https://substackcdn.com/image/fetch/$s_!m-63!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7064778d-4feb-4517-badd-dc22b6a6f8c7_800x533.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!m-63!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7064778d-4feb-4517-badd-dc22b6a6f8c7_800x533.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><p>I&#8217;ve always had problems with the infrastructure, hosting, and making something that can be used by me or someone else. Since I can remember I always wanted to become a backend developer. I was never attracted by frontend work. I did not like database work at first. I hated it. In my 2nd year of university, I even decided that no one would ever force me to use this awful technology which is databases. Reality hit me hard pretty quickly, but I still did not enjoy database work. I think it might have bad tooling&#8202;&#8212;&#8202;most DB tools are pure monstrosities taken right from the 90s.</p><p>Also, any frontend work was a problem for me&#8202;&#8212;&#8202;I just did not enjoy it. Nothing too personal, it was just boring.</p><p>At multiple points in my career, I was a generalist, a one-man army. I was able to just fill the shoes of any position that was required at that moment. If there was a need for frontend developer, I was a poor frontend developer. If there was a need to record and process some promotional video, conduct training, or write long documentation for a project, I was able to do it all, and usually, I was not that unsatisfied about it. I even at some point was responsible for publishing the backend application I&#8217;ve maintained into the internet. I&#8217;m calling it this strange way because it had nothing to do with infrastructure or hosting&#8202;&#8212;&#8202;I just created a web application in Azure and then clicked the &#8216;publish&#8217; button in Visual Studio. And that was it.</p><p>Later on, I had an opportunity to work with hosting on virtual machines. It was horrible as I think of it right now&#8202;&#8212;&#8202;we had a virtual machine with IIS installed, and deployment was manual&#8202;&#8212;&#8202;we just generated a&nbsp;.zip package using pipeline and we had to grab this package, upload it by hand into a virtual machine, and then replace the folder which contained production application. At some point I got fluent in it, however, I still did not feel like it&#8217;s infrastructure work. It was some Windows Server 2012, and using a remote desktop I could click out most of the things. At that time I started a bit of powershell scripting to automate some processes. But still, there was some &#8216;mysterious end boss&#8217; over there, which was cloud. Scary sounding names, like &#8216;EC2&#8217;, &#8216;S3&#8217;, &#8216;VPS&#8217;, &#8216;Proxy&#8217;, etc. And the entry barrier was a bit too high for me. I was afraid. 
As I think of it right now, the biggest problem for me back then was that I had no idea what I was doing. Once I&#8217;ve created an account on AWS, and started following some tutorials on how to deploy the &#8216;hello world&#8217; application to the cloud. And the thing that stopped me from progressing at some point was just stupid&#8202;&#8212;&#8202;I managed to create something by mistake which started to cost me money. As I created the account, they promised that I wouldn&#8217;t have to pay and there would be no costs!</p><p>It even wasn&#8217;t about the money, because it was somewhere around 1$, it was about the possibility of failure&#8202;&#8212;&#8202;it was so easy to make a wrong step and be charged for things that I did not need. I just wanted to play around with things, and I ended up in a minefield where every step could cost me money, and also there was no good way of monitoring what was happening. I could not set up triggers like &#8216;if I exceed 10$/mo, just stop everything, delete all resources, and send me an email&#8217;.</p><p>Then I joined a company where we use cloud technologies, mostly Azure stack. I&#8217;ve learned a bunch of things in practice. I&#8217;ve learned to use docker. I&#8217;ve touched Kubernetes. I used things like queues, table storage, and S3. And this all from a user perspective&#8202;&#8212;&#8202;I did not have to worry about hosting all things. I did not have to worry about managing Kubernetes, pipelines, etc. This all was managed by other people. This gave me a good opportunity to slowly learn how to use it, and how to approach things. The thing I&#8217;m the most happy about is that I&#8217;ve learned Kubernetes. Many terms might sound scary at first: deployment, job, cronjob, service, ingress, etc. But since other people were managing it, and I also had colleagues willing to help me and explain to me how to use this whole thing, it did not sound that scary.</p><p>And then after some time, as I got comfortable with just using the cloud, I started to think that it would be nice to be able to jump a bit deeper into the infrastructure. I&#8217;ve started to think about self-hosting. I had (and still have) a mental barrier from buying a VPS because I&#8217;m afraid I will just get demotivated and stop using it. I think this fear is irrational, but still exists. I don&#8217;t feel like fully committing with my money and betting that I will not get bored in a week. I don&#8217;t like wasting money.</p><p>I have started small, the way I&#8217;m comfortable: I took my old laptop I was not using anymore, and installed Ubuntu on it. I&#8217;ve been always afraid of Linux and bash, but a few months prior I started using WSL and it helped me a lot&#8202;&#8212;&#8202;I got a bit more comfortable with bash, it gave me a slow and safe intro to dotfiles, shell configuration and overall managing of Linux. So I managed to take a first step&#8202;&#8212;&#8202;I installed Minikube on my Ubuntu, and I was able to access some sample application from another computer inside of my local network. And that&#8217;s it. Over there I&#8217;ve stopped again. The reasons were multiple: I had no meaningful application to host, and I had problems with high ping locally (I have a ping of 50&#8211;300ms between two laptops in the same room, I suspect that it&#8217;s caused by the radio antenna which is just above my apartment), and I was again afraid. 
Afraid that if I would expose my laptop to the internet since I have no idea what I&#8217;m doing, I will get hacked in a matter of minutes. So I had a working cluster, and this was it when it came to the infrastructure for a couple of months.</p><p>A couple of days ago I came up with a brilliant idea. My fiance goes to the gym with her friend, and they attend some organized classes together. The problem is that the registration for those classes opens sometime 2 days before the classes (the exact time of opening is not disclosed) and more people want to attend than there are free spots. So my fiance and her friend had to go into the website and refresh it from time to time during the day if they wanted to sign up. And then I realized that I have all that I need&#8202;&#8212;&#8202;I have skills to write a crawler, I have a Kubernetes cluster which I don&#8217;t even need to expose to the internet, and then I&#8217;ll be able to solve an actual real-life problem, and also have a good excuse to make another take on the infrastructure.</p><p>So I took my old laptop, which still had Minikube installed, turned it on, and checked if everything was still working. And it was! So my first step was to create a PostgreSQL database, hosted on my local Kubernetes. It was not required for the task (just a little nice to have for the near future), but I&#8217;ve decided to use this small spark of motivation to make it happen. The crawler part and hosting it was the easy part, relative to what I thought was waiting for me with this database. And I was partially right&#8202;&#8212;&#8202;I had some problems. I had to go manually onto the machine with Kubernetes and clean up some corrupted files after failed attempts. But I&#8217;m happy with that because not only I&#8217;ve managed to do it, but also I&#8217;ve learned a few things. This way not only do I get more comfortable with working with Linux, but also I get more confidence that I am capable of fixing things by myself.</p><p>Right now the status of everything is:</p><ul><li><p>I have a Kubernetes cluster running on my old laptop&#8202;&#8212;&#8202;I have a PostgreSQL database running on this cluster&#8202;&#8212;&#8202;I have a&nbsp;.NET crawler deployed to the cluster, which sends emails once there is a free spot on the class</p></li></ul><p>What I am working on right now, and the plan for the future:</p><ul><li><p>I want to expose the cluster to the internet, using some kind of proxy / tunnel / reverse proxy (preferably a free one, I&#8217;m exploring options using Cloudflare right now)&#8202;&#8212;&#8202;I want the crawler to use the database to store and read some data&#8202;&#8212;&#8202;I want to make some frontend for the crawler which will be exposed publicly to the internet</p></li></ul><p>So far I&#8217;d say that I&#8217;m not only happy with my progress, but also proud of myself that I am capable of working with infrastructure and it&#8217;s not as hard as I thought it was. I&#8217;m looking forward to the next steps, and I&#8217;m excited to see where this journey will take me.</p>]]></content:encoded></item><item><title><![CDATA[Integration testing of business logic using dockerized PostgreSQL with Test Containers in C#]]></title><description><![CDATA[Testing is hard. Especially writing good tests. 
In this article, I&#8217;ll show you a simple way of creating integration tests with the real&#8230;]]></description><link>https://wqplease.com/p/integration-testing-of-business-logic-using-dockerized-postgresql-with-test-containers-in-c-2d1e4e617493</link><guid isPermaLink="false">https://wqplease.com/p/integration-testing-of-business-logic-using-dockerized-postgresql-with-test-containers-in-c-2d1e4e617493</guid><dc:creator><![CDATA[Paweł Zelmański]]></dc:creator><pubDate>Sun, 10 Mar 2024 19:59:46 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/26140e20-dc60-47aa-88d9-b27d46cb8310_800x534.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Testing is hard. Especially writing good tests. In this article, I&#8217;ll show you a simple way of creating integration tests with a real database. In my case it&#8217;s applied to a cronjob (so a regular console application), but with small modifications it should also be easily applicable to other application types.</p><p><em><a href="https://github.com/pzelmanski/cronjob-testing">Here</a> you can find the whole project on GitHub</em></p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!Byjx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff82417d6-8c4f-4379-b3c2-5b3b64e71596_800x534.jpeg" alt=""></figure></div><h3>Business logic &amp; application</h3><p>Imagine you have a pretty simple cronjob to create. It&#8217;s a small console application which grabs data from one table in the database, transforms it, and then inserts it back into the database. In our case, it will be logs. We have a table into which raw logs are collected. You can think of it as automatically gathered logs of all the events happening in the production cluster, which need to be filtered and cleaned up in order to store and browse them long term and save space.</p><pre><code>create table raw_logs
(
    id uuid primary key,
    message text not null,
    severity int not null,
    source text not null,
    timestamp timestamp not null,
    additional_field1 text,
    additional_field2 text
);</code></pre><p>Most of the fields in the <code>raw_logs</code> table are not interesting to us; we only care about <code>message</code> and <code>severity</code>. Message is just the text associated with a log, and severity is an enum stored as an int (with <code>Unknown = 0</code> as a guard value, which the transform logic below treats as an error):</p><pre><code>public enum Severity
{
    Unknown = 0,
    Info = 1,
    Warning = 2,
    Error = 3,
}</code></pre><p>We have a task to filter and transform the data so that another application can process it without knowing the exact details of the raw logs. The rules are as follows:</p><ul><li><p>if a log is info, we don&#8217;t care about it</p></li><li><p>if a log is warning, we only care about the number of such logs</p></li><li><p>if a log is error, we care about its message</p></li></ul><p>And we want to save the result into another PostgreSQL table. So let&#8217;s get into the implementation of the C# application. First, we need a business object, <code>RawLog</code>, holding only the fields we care about:</p><pre><code>public class RawLog
{
    public readonly Guid Id;
    public readonly string Message;
    public readonly Severity Severity;

    private RawLog(Guid id, string message, Severity severity)
    {
        Id = id;
        Message = message;
        Severity = severity;
    }
}</code></pre>
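<p>The DB DTO, <code>RawLogDto</code>, isn&#8217;t shown in the post. A minimal sketch, assuming plain settable properties whose names match the column names (Dapper ignores the extra columns returned by <code>select *</code>), could look like this:</p><pre><code>public class RawLogDto
{
    // Property names match the raw_logs columns, so Dapper can map them by name.
    public Guid Id { get; set; }
    public string Message { get; set; } = "";
    public int Severity { get; set; }
}</code></pre><p>And a function reading the logs in the <code>DatabaseReader</code> class:</p><pre><code>public async Task&lt;List&lt;RawLogDto&gt;&gt; ReadRawData()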
{
    string sql = $"select * from raw_logs";
    var query = await _connection.QueryAsync&lt;RawLogDto&gt;(sql);
    return query.ToList();
}</code></pre>
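<p>The <code>DatabaseReader</code> scaffolding around <code>_connection</code> isn&#8217;t shown in the post. Since <code>Program.cs</code> later does <code>await using var databaseReader = new DatabaseReader(connectionString)</code>, a plausible sketch (my assumption) is a thin wrapper over an <code>NpgsqlConnection</code>:</p><pre><code>public sealed class DatabaseReader : IAsyncDisposable
{
    private readonly NpgsqlConnection _connection;

    public DatabaseReader(string connectionString)
    {
        // Dapper opens the connection on first use if it is still closed.
        _connection = new NpgsqlConnection(connectionString);
    }

    // ReadRawData() from above lives here.

    public ValueTask DisposeAsync() =&gt; _connection.DisposeAsync();
}</code></pre><p>(<code>DatabaseWriter</code> can mirror the same shape.) So now we have the raw logs, and we want to filter out only the ones we care about&#8202;&#8212;&#8202;warnings and errors. We don&#8217;t want to operate on DB DTOs, so in this step we also map them into business objects. We do this in the <code>Worker</code> class, which will be the main point of our logic.</p><pre><code>private static IEnumerable&lt;ITransformResult&gt; Transform(List&lt;RawLogDto&gt; data)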
    {
        var logs = data
            .Select(RawLog.FromDto)
            .Select(x =&gt;
            {
                ITransformResult result = x.Severity switch
                {
                    Severity.Unknown =&gt; throw new InvalidOperationException("Unknown log severity"),
                    Severity.Info =&gt; new DropTransformResult(),
                    Severity.Warning =&gt; new WarningTransformResult(),
                    Severity.Error =&gt; new ErrorTransformResult(x.Id, x.Message),
                    _ =&gt; throw new ArgumentOutOfRangeException()
                };
                return result;
            });
        return logs;
    }</code></pre>
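<p>The <code>ITransformResult</code> implementations aren&#8217;t shown in the post either. A minimal sketch, taking the type names from the switch above (the record shapes are my assumption), could be:</p><pre><code>public interface ITransformResult { }

// Info logs are dropped entirely, so no payload is needed.
public record DropTransformResult : ITransformResult;

// Warnings are only counted, so no payload is needed either.
public record WarningTransformResult : ITransformResult;

// Errors keep their id and message.
public record ErrorTransformResult(Guid Id, string Message) : ITransformResult;</code></pre><p>We also need to add a mapping function from the DB DTO to the business-logic object:</p><pre><code>public static RawLog FromDto(RawLogDto dto)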
{
    return new RawLog(dto.Id, dto.Message, (Severity) dto.Severity);
}</code></pre><p>Now that we have the transformed logs, we want to convert them into their final, DB-writable form: if a log is an error we want each individual message; if it&#8217;s a warning we only want the count.</p><pre><code>private static IEnumerable&lt;LogWriteDbDto&gt; ToDbDto(IEnumerable&lt;ITransformResult&gt; data)
{
    foreach (var d in data)
    {
        switch (d)
        {
            case ErrorTransformResult e:
                yield return new LogWriteDbDto(Severity.Error, e.Message);
                break;
            case WarningTransformResult:
                yield return new LogWriteDbDto(Severity.Warning, null);
                break;
            case DropTransformResult:
                break;
            default:
                throw new InvalidOperationException("Unknown worker result");
        }
    }
}</code></pre><p>And we also need the <code>LogWriteDbDto</code>:</p><pre><code>public class LogWriteDbDto(Severity Severity, string? Message)
{
    public Guid Id { get; init; } = Guid.NewGuid();
    public Severity Severity { get; init; } = Severity;
    public string? Message { get; init; } = Message;
}</code></pre><p>Finally, with the logs transformed and processed, we need a <code>DatabaseWriter</code>:</p><pre><code>public async Task WriteTransformedLogs(List&lt;LogWriteDbDto&gt; dtos)
{
    var sql = "INSERT INTO transformed_logs (id, severity, message) VALUES (@id, @severity, @message)";
    // Dapper executes this insert once for each element of the list.
    await _connection.ExecuteAsync(sql, dtos);
}</code></pre><p>With the writer in place, here is the SQL table it writes to:</p><pre><code>create table transformed_logs
(
    id uuid primary key,
    message text,
    severity int not null
);</code></pre><p>And now that we have all the pieces, we can put them together:</p><pre><code>public async Task DoAsync()
{
    var data = await _databaseReader.ReadRawData();
    var transformed = Transform(data);
    var dbDtos = ToDbDto(transformed);
    await _databaseWriter.WriteTransformedLogs(dbDtos.ToList());
}</code></pre>
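<p>The <code>Worker</code> scaffolding around the <code>_databaseReader</code> and <code>_databaseWriter</code> fields isn&#8217;t shown in the post; given how <code>WorkerFactory</code> constructs it later, a plausible sketch is:</p><pre><code>public class Worker
{
    private readonly DatabaseReader _databaseReader;
    private readonly DatabaseWriter _databaseWriter;

    public Worker(DatabaseReader databaseReader, DatabaseWriter databaseWriter)
    {
        _databaseReader = databaseReader;
        _databaseWriter = databaseWriter;
    }

    // DoAsync(), Transform() and ToDbDto() from above live here.
}</code></pre><p>And then in <code>Program.cs</code> we can have all the setup, configuration, HostBuilder, etc., and ultimately one entry point: <code>Worker.DoAsync()</code>, which is responsible for all the business logic.</p><p>In my case, the whole <code>Program.cs</code> looks like this:</p><pre><code>// Connection string of the local database, hosted inside Docker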
const string connectionString = "Host=localhost:5432;Username=postgres;Password=admin;Database=postgres";
await using var databaseReader = new DatabaseReader(connectionString);
await using var databaseWriter = new DatabaseWriter(connectionString);
var worker = new Worker(databaseReader, databaseWriter);
await worker.DoAsync();

Console.WriteLine("Goodbye");</code></pre><h3>Testing</h3><p>Now as we have the application working, we can get into the testing part. Although testing the whole application would be nice, I think it would be way more complicated than just testing the <code>Worker.DoAsync()</code> method, using the actual PostgreSQL database.</p><p>First, we need to be able to spin up the PostgreSQL container. For this I&#8217;m using the <code>TestContainers</code> and <code>TestContainers.PostgreSql</code> libraries:</p><pre><code>public PostgreSqlContainer Get()
{
    return new PostgreSqlBuilder()
        .WithImage("postgres:16")
        .WithCleanUp(true)
        .Build();
}</code></pre><p>As we have the container, we can already use it in our integration test:</p><pre><code>[Fact]
public async Task LogsTransformationTest()
{
    // Arrange
    var container = new TestContainerFactory().Get();
    await container.StartAsync();
}</code></pre><p>As a prerequisite to these tests, we need a working DB with a schema identical to the production one. We can achieve this by storing the whole schema in a <code>schema.sql</code> file. For ease of use I&#8217;ve put it in the test project, but it&#8217;s also possible to add it to a solution folder so that it&#8217;s accessible from multiple projects. In our case, <code>schema.sql</code> is pretty short, as it just creates two tables:</p><pre><code>create table raw_logs
( .. );</code></pre><pre><code>create table transformed_logs
( .. );</code></pre><p>And we also need to have some test data in the <code>raw_logs</code> table. I have a second file, called <code>data.sql</code>:</p><pre><code>insert into raw_logs (id, message, severity, source, timestamp, additional_field1, additional_field2) values ('c154b819-3e7b-4fa8-bf8b-5de0b7e35d3f', 'Info log 1', 1, 'Source 1', '2024-03-08 12:00:00', 'Value 1', 'Value 2');
insert into raw_logs (id, message, severity, source, timestamp, additional_field1, additional_field2) values ('f6a4dc05-2b70-44b6-90cb-238bb6f19a93', 'Warning log 1', 2, 'Source 2', '2024-03-08 12:10:00', 'Value 3', 'Value 4');
insert into raw_logs (id, message, severity, source, timestamp, additional_field1, additional_field2) values ('ecb27850-1a6c-4b2d-aa30-63f94ff541f8', 'Error log 1', 3, 'Source 3', '2024-03-08 12:20:00', 'Value 5', 'Value 6');
insert into raw_logs (id, message, severity, source, timestamp, additional_field1, additional_field2) values ('bae4b6b2-8e46-4224-967a-40528ed06a68', 'Info log 2', 1, 'Source 4', '2024-03-08 12:30:00', 'Value 7', 'Value 8');
insert into raw_logs (id, message, severity, source, timestamp, additional_field1, additional_field2) values ('a8cfdb45-d6c1-4814-a65c-58d942f52a69', 'Warning log 2', 2, 'Source 5', '2024-03-08 12:40:00', 'Value 9', 'Value 10');
insert into raw_logs (id, message, severity, source, timestamp, additional_field1, additional_field2) values ('c3bf6d44-515d-4945-b3b7-810620fb3e82', 'Error log 2', 3, 'Source 6', '2024-03-08 12:50:00', 'Value 11', 'Value 12');
insert into raw_logs (id, message, severity, source, timestamp, additional_field1, additional_field2) values ('cfd5dab5-1342-47b8-8c96-04ec5e5795ec', 'Info log 3', 1, 'Source 7', '2024-03-08 13:00:00', 'Value 13', 'Value 14');
insert into raw_logs (id, message, severity, source, timestamp, additional_field1, additional_field2) values ('c68b5e19-1097-4ec3-967e-67190d22c3f5', 'Warning log 3', 2, 'Source 8', '2024-03-08 13:10:00', 'Value 15', 'Value 16');
insert into raw_logs (id, message, severity, source, timestamp, additional_field1, additional_field2) values ('eddb52b0-70d7-45b3-9fb4-c40ddbf5ec9b', 'Error log 3', 3, 'Source 9', '2024-03-08 13:20:00', 'Value 17', 'Value 18');
insert into raw_logs (id, message, severity, source, timestamp, additional_field1, additional_field2) values ('1b9e8608-fa0b-4b4c-98cf-96b84c4bf8a6', 'Info log 4', 1, 'Source 10', '2024-03-08 13:30:00', 'Value 19', 'Value 20');
insert into raw_logs (id, message, severity, source, timestamp, additional_field1, additional_field2) values ('b01d1e47-dcb4-472f-b6f4-7b20b14e2b70', 'Warning log 4', 2, 'Source 11', '2024-03-08 13:40:00', 'Value 21', 'Value 22');
insert into raw_logs (id, message, severity, source, timestamp, additional_field1, additional_field2) values ('e4c5245b-8631-4f1e-93ee-42b2e87eab07', 'Error log 4', 3, 'Source 12', '2024-03-08 13:50:00', 'Value 23', 'Value 24');
insert into raw_logs (id, message, severity, source, timestamp, additional_field1, additional_field2) values ('3dfe7d9d-dcaa-438f-9a2d-8e10962886b0', 'Info log 5', 1, 'Source 13', '2024-03-08 14:00:00', 'Value 25', 'Value 26');
insert into raw_logs (id, message, severity, source, timestamp, additional_field1, additional_field2) values ('293fb847-9cfb-4a4c-b22d-6f2a1b319b16', 'Warning log 5', 2, 'Source 14', '2024-03-08 14:10:00', 'Value 27', 'Value 28');
insert into raw_logs (id, message, severity, source, timestamp, additional_field1, additional_field2) values ('f2430cf6-1c6e-4343-9b73-b3b594b62814', 'Error log 5', 3, 'Source 15', '2024-03-08 14:20:00', 'Value 29', 'Value 30');</code></pre><p>Now we need to read the&nbsp;<code>.sql</code> files and apply them to the container:</p><pre><code>public async Task ExecuteScriptFromFile(string connectionString, string path)
{
    var fileText = await FileReader.ReadFile(path);
    await using var connection = new NpgsqlConnection(connectionString);
    await connection.OpenAsync();
    await connection.ExecuteAsync(fileText);
}</code></pre>
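<p><code>FileReader</code> isn&#8217;t shown in the post; it can be as simple as a thin wrapper over <code>File.ReadAllTextAsync</code> (a sketch, assuming the&nbsp;<code>.sql</code> files are copied to the test output directory, e.g. with <code>CopyToOutputDirectory</code> in the test project file):</p><pre><code>public static class FileReader
{
    // Reads the whole script file; relative paths resolve against the test run's working directory.
    public static Task&lt;string&gt; ReadFile(string path) =&gt; File.ReadAllTextAsync(path);
}</code></pre><pre><code>[Fact]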
public async Task LogsTransformationTest()
{
    ...
    var testDbWriter = new TestDbWriter();
    string schemaPath = @"./DatabaseScripts/schema.sql";
    string dataPath = @"./DatabaseScripts/data.sql";
    await testDbWriter.ExecuteScriptFromFile(container.GetConnectionString(), schemaPath);
    await testDbWriter.ExecuteScriptFromFile(container.GetConnectionString(), dataPath);
}</code></pre><p>Now we can add a worker factory and call it from the test:</p><pre><code>public static class WorkerFactory
{
    public static Worker Get(string connectionString)
    {
        var databaseReader = new DatabaseReader(connectionString);
        var databaseWriter = new DatabaseWriter(connectionString);
        return new Worker(databaseReader, databaseWriter);
    }
}</code></pre><pre><code>[Fact]
public async Task LogsTransformationTest()
{
    ...
    var worker = WorkerFactory.Get(container.GetConnectionString());
    // Act
    await worker.DoAsync();
}</code></pre><p>Now all the work is done, and the results should be waiting for us in the DB. So we need a test DB reader for the assertions:</p><pre><code>public class TestDbReader
{
    public async Task&lt;List&lt;TransformedLogs&gt;&gt; ReadAllLogs(string connectionString)
    {
        await using var connection = new NpgsqlConnection(connectionString);
        await connection.OpenAsync();
        var logs = await connection.QueryAsync&lt;TransformedLogs&gt;("SELECT * FROM transformed_logs");
        return logs.ToList();
    }
}</code></pre><pre><code>public class TransformedLogs
{
    public Guid Id { get; init; }
    public string? Message { get; init; }
    public Severity Severity { get; init; }
}</code></pre><p>Going back to the test, we can use the newly created <code>TestDbReader</code> to assert that the worker did its work as expected:</p><pre><code>[Fact]
public async Task LogsTransformationTest()
{
    ...
    // Assert
    var logs = await new TestDbReader().ReadAllLogs(container.GetConnectionString());
    logs.Should().HaveCount(10);
    logs.Where(x =&gt; x.Severity is Severity.Warning).Should().HaveCount(5);
    logs.Where(x =&gt; x.Severity is Severity.Error).Should().HaveCount(5);
    logs.Where(x =&gt; x.Severity is Severity.Error).Select(x =&gt; x.Message).All(x =&gt; x is null).Should().BeFalse();
}</code></pre><p>I&#8217;m using the <code>FluentAssertions</code> library here to improve the asserts. Instead of</p><pre><code>Assert.Equal(5, logs.Count(x =&gt; x.Severity is Severity.Warning));</code></pre><p>I can write</p><pre><code>logs.Where(x =&gt; x.Severity is Severity.Warning).Should().HaveCount(5);</code></pre><p>And now, as a final touch, we should clean up the test container:</p><pre><code>[Fact]
public async Task LogsTransformationTest()
{
    ...
    // Cleanup
    await container.DisposeAsync(); 
}</code></pre><p>So the whole test looks like this:</p><pre><code>[Fact]
public async Task LogsTransformationTest()
{
    // Arrange
    var container = new TestContainerFactory().Get();
    await container.StartAsync();
    var worker = WorkerFactory.Get(container.GetConnectionString());

    var testDbWriter = new TestDbWriter();
    string schemaPath = @"./DatabaseScripts/schema.sql";
    string dataPath = @"./DatabaseScripts/data.sql";
    await testDbWriter.ExecuteScriptFromFile(container.GetConnectionString(), schemaPath);
    await testDbWriter.ExecuteScriptFromFile(container.GetConnectionString(), dataPath);

    // Act
    await worker.DoAsync();

    // Assert
    var logs = await new TestDbReader().ReadAllLogs(container.GetConnectionString());
    logs.Should().HaveCount(10);
    logs.Where(x =&gt; x.Severity is Severity.Warning).Should().HaveCount(5);
    logs.Where(x =&gt; x.Severity is Severity.Error).Should().HaveCount(5);
    logs.Where(x =&gt; x.Severity is Severity.Error).Select(x =&gt; x.Message).All(x =&gt; x is null).Should().BeFalse();

    // Cleanup
    await container.DisposeAsync();
}</code></pre><h3>Conclusion</h3><p>I think this way of testing provides a nice balance between the complexity of the test and the level of confidence it provides. Running this test takes only a couple of seconds, and the best thing about it is that it tests the whole depth of the business part of the application&#8202;&#8212;&#8202;not only the business rules, but also the DB queries, SQL writes, etc. It&#8217;s also possible to test whether your DB implementation struggles with multiple instances&#8202;&#8212;&#8202;you can easily create 10 workers and execute them in parallel to check that nothing breaks when 10 or even 100 instances try to run at the same time.</p>
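<p>A minimal sketch of that parallel variant, reusing the <code>WorkerFactory</code> from above, could look like this:</p><pre><code>// Spin up 10 workers against the same test container and run them concurrently;
// the test fails if any of them throws.
var workers = Enumerable.Range(0, 10)
    .Select(_ =&gt; WorkerFactory.Get(container.GetConnectionString()))
    .ToList();
await Task.WhenAll(workers.Select(w =&gt; w.DoAsync()));</code></pre><p>It&#8217;s invaluable to be able to test how multiple instances behave locally, and to do it each time you run your test suite. With integration tests done this way, you don&#8217;t need additional infrastructure to run DB tests in your pipelines, which makes your tests scale better. Also, by creating a new DB each time, you avoid problems with parallel test runs hitting the same DB and leaving it in a broken state&#8202;&#8212;&#8202;tests are more consistent and better isolated. This technique has served me well, and I hope you will also find it useful.</p>]]></content:encoded></item></channel></rss>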