[{"content":"It has been almost six years since my last post here. Six years! When I wrote about joining Facebook back in 2020, I had no idea it would be this long before I\u0026rsquo;d write again. Life got busy, priorities shifted, and the blog quietly went to sleep.\nWell, it\u0026rsquo;s waking up now.\nWhat happened since 2020 A lot has changed. Facebook became Meta, and I moved from software engineering into engineering management. I currently run a team in Meta\u0026rsquo;s Superintelligence Labs (MSL), where we build the data infrastructure behind Meta\u0026rsquo;s generative AI efforts - web crawlers, synthetic data pipelines, and infrastructure for AI agents.\nI have to be honest - I am absolutely loving it. Working at the intersection of large-scale distributed systems and AI is exactly the kind of challenge I was looking for. The pace is intense, the problems are fascinating, and the people I work with are some of the best I have ever worked with.\nThe transition from IC to manager was its own journey. I still remember being quite nervous about giving up the daily coding. What I discovered is that the leadership skills I wrote about on this blog years ago - motivating teams, listening to others, staying humble - turned out to be far more useful than I expected. Funny how that works.\nWhy I\u0026rsquo;m writing again Two reasons, really.\nFirst, I have been building a bunch of side projects recently and I want to share what I\u0026rsquo;m learning. I\u0026rsquo;m running The Dice Drop - a board games website and Twitter bot, Indie Game Drop - an indie game discovery platform, and BattleCast - a D\u0026amp;D combat simulator that is just getting started. Building these has taught me a lot about working with AI tools, deploying static sites, running automated content pipelines, and all sorts of things that I think are worth sharing.\nSecond, the AI wave has changed everything. 
When I started this blog in 2018, I was writing about Spring Boot and microservices. Those topics are still relevant, but the world has moved on. I now work with LLMs every day, both at Meta and in my side projects. I use Claude Code to build prototypes, I use AI assistants to manage my workflows, and I\u0026rsquo;m seeing first-hand how these tools are reshaping what a single developer (or a small team) can accomplish. I want to write about that.\nWhat to expect The blog is going to be a bit different this time around. You\u0026rsquo;ll still see technical content, but the focus is shifting:\nAI and LLMs in practice - not the hype, but the real experience of building with these tools Side project diaries - the messy reality of building products in your spare time The occasional opinion piece - because some things haven\u0026rsquo;t changed When I started e4developer, I wrote over 100 posts in the first year. That was quite a challenge! This time around, no such challenges - just writing for fun and sharing things I find genuinely interesting.\nA fresh start You might notice the blog looks a bit different. I migrated the whole thing from WordPress to Hugo, gave it a new design, and moved it to GitHub Pages. All the old articles are still here - every single one of them. The HATEOAS explanation, the Spring Boot best practices, the \u0026ldquo;please stop writing for loops\u0026rdquo; post that apparently resonated with a lot of people. They\u0026rsquo;re all part of the story.\nIf you\u0026rsquo;ve been reading e4developer before - thank you. Your support over the years meant more to me than you probably realize. And if you\u0026rsquo;re new here - welcome! I hope you find something useful.\nLet\u0026rsquo;s see where this goes. Till next time!\n","permalink":"https://e4developer.com/posts/e4developer-is-back/","summary":"\u003cp\u003eIt has been almost six years since my last post here. Six years! 
When I wrote about joining Facebook back in 2020, I had no idea it would be this long before I\u0026rsquo;d write again. Life got busy, priorities shifted, and the blog quietly went to sleep.\u003c/p\u003e\n\u003cp\u003eWell, it\u0026rsquo;s waking up now.\u003c/p\u003e\n\u003ch2 id=\"what-happened-since-2020\"\u003eWhat happened since 2020\u003c/h2\u003e\n\u003cp\u003eA lot has changed. Facebook became Meta, and I moved from software engineering into engineering management. I currently run a team in Meta\u0026rsquo;s Superintelligence Labs (MSL), where we build the data infrastructure behind Meta\u0026rsquo;s generative AI efforts - web crawlers, synthetic data pipelines, and infrastructure for AI agents.\u003c/p\u003e","title":"e4developer is back"},{"content":"It has been a while since I last wrote a post here, so I wanted to update you on what is happening with the website (why the lack of activity) and what’s new in my life. And there is plenty!\nJoining Facebook When I wrote my last blog post here (October 2019) I was in full-on Facebook interview preparation mode. I really enjoyed my time at Scott Logic, but Facebook had been my long-time dream and I decided to try to get in one more time.\nAfter multiple hours of LeetCode, a second reading of Cracking the Coding Interview and some serious work with Elements of Programming Interviews in Java, I felt ready for the series of interviews that Facebook tests applicants with. To be fair, writing this blog and working for Scott Logic helped a lot as well.\nThe interviews went well, I got the offer and I joined Facebook in March 2020. 
I still can’t quite believe it, as this really has been my dream and now it is a reality!\nWriting on E4developer.com With all the interview preparation and how seriously I was taking it, I didn’t really have time for writing anything here- I still had a full-time job after all!\nMoving forward I plan to keep writing here, although you will likely see less of Spring and Java, as I have a lot of other technologies to catch up with these days.\nWhat you may start seeing more of is articles about my other interests such as IoT (Raspberry Pi), and especially chess, which I have been playing quite a lot recently!\nChess, chess, chess… So, I have been playing a lot of chess recently. Mainly on lichess, where if you are keen you can challenge me one day.\nI am having loads of fun learning with chessable as well, which I find much more engaging than written books. It’s basically a website that provides chess learning material, but in an interactive form- highly recommended if you are a more serious player.\nI am also thinking of starting a website dedicated to chess, maybe even some chess streaming/videos. I would like to keep content on e4developer mainly developer-oriented, so you probably won’t see any chess-only content here.\n2020 – living in an alternative reality So with all these changes, we also have the current worldwide situation, where we are locked in our homes. If leaving my job and joining Facebook wasn’t crazy and overwhelming enough, now there is that!\nIn reality, I am doing quite well, spending much more time with my family, as we are working from home. I am really grateful for that, as I am trying to focus on the positives in this situation. Being a software developer really is a privilege, as we can work from home much more easily than most professions out there.\nI hope that whoever you are, you can also focus on some positives in this horrible situation.\nThank you! Thank you to all of you reading my blog. 
This was a tremendous motivation for the past few years and really gave me the confidence to pursue my dreams and the energy to keep getting better. Till the next post!\n","permalink":"https://e4developer.com/posts/e4developer-facebook-chess-and-life-in-2020/","summary":"\u003cp\u003eIt has been a while since I last wrote a post here, so I wanted to update you on what is happening with the website (why the lack of activity) and what’s new in my life. And there is plenty!\u003c/p\u003e\n\u003ch2 id=\"joining-facebook\"\u003eJoining Facebook\u003c/h2\u003e\n\u003cp\u003eWhen I wrote my last blog post here (October 2019) I was in full-on Facebook interview preparation mode. I really enjoyed my time at Scott Logic, but Facebook had been my long-time dream and I decided to try to get in one more time.\u003c/p\u003e","title":"E4developer, Facebook, chess and life in 2020"},{"content":"On the 24th of September 2019, I had an opportunity to speak at the very first DevOps Roundabout meetup in London. You can watch my talk on YouTube.\nThe idea behind this talk is the same as the one behind my whitepaper (with the identical title). 
First, explaining to the wider audience what the DevOps movement is really all about and then helping people to embark on that journey.\nThe difference in the talk is that I do not focus specifically on the public sector, but think in broader terms- how everyone can embark on this journey.\nAnother reason to watch the talk (rather than simply read the whitepaper) is that I offer more conversational coverage of these topics and discuss things that I did not touch on in the whitepaper, such as the difference between DEVops and devOPS engineers.\nIf you are intrigued, check out the talk on YouTube and let me know what you think:\n","permalink":"https://e4developer.com/posts/journey-to-devops-talk-at-the-devops-roundabout/","summary":"\u003cp\u003eOn the 24th of September 2019, I had an opportunity to speak at the very first \u003ca href=\"https://www.meetup.com/The-DevOps-Roundabout/\"\u003eDevOps Roundabout\u003c/a\u003e meetup in London. You can \u003ca href=\"https://www.youtube.com/watch?v=EGTMkZkPhF8\"\u003ewatch my talk on YouTube\u003c/a\u003e.\u003c/p\u003e\n\u003cp\u003eThe idea behind this talk is the same as the one behind \u003ca href=\"https://e4developer.com/posts/the-journey-to-devops-my-first-white-paper/\"\u003emy whitepaper\u003c/a\u003e (with the identical title). First, explaining to the wider audience what the DevOps movement is really all about and then helping people to embark on that journey.\u003c/p\u003e\n\u003cp\u003eThe difference in the talk is that I do not focus specifically on the public sector, but think in broader terms- how everyone can embark on this journey.\u003c/p\u003e","title":"Journey to DevOps - Talk at the DevOps Roundabout"},{"content":"AWS Identity and Access Management (IAM) is one of the most important services available in AWS. Most people know that you can create user accounts and assign permissions (policies). 
In this blog post, I will look at a few more advanced features of the AWS IAM.\nAWS policies can have conditions AWS policies can have conditions. That means that you can apply a policy that would only work in a specific time window. This can be, for example, next Wednesday between 1 pm and 4 pm if you expect a user to need specific permission around this time. In general, you can use these conditions to grant and revoke permissions to do things at specific dates and times.\nAn example policy condition that gives access in a specific time window (here, 1 pm to 4 pm UTC on 31 July 2019) will look as follows:\n\u0026#34;Condition\u0026#34; : { \u0026#34;DateGreaterThan\u0026#34; : { \u0026#34;aws:CurrentTime\u0026#34; : \u0026#34;2019-07-31T13:00:00Z\u0026#34; }, \u0026#34;DateLessThan\u0026#34;: { \u0026#34;aws:CurrentTime\u0026#34; : \u0026#34;2019-07-31T16:00:00Z\u0026#34; } } Of course, if it were only about dates and times it would not be very exciting; you can also set conditions around things such as:\nUsernames Source IPs SSL being used (to ensure only secure requests are respected) Tag keys (to further customise conditions) And some more as described here: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html It is important to be aware of these possibilities, as setting tight access control is the best way of avoiding your account being damaged or compromised.\nAnother significant benefit of these conditions is reducing the manual workload of granting and revoking permissions when you know the access need in advance.\nAWS policies can be dynamic Imagine a scenario where you want to give IAM users “home directories” in S3. 
If you are not aware of AWS dynamic policies, it may look like you need to create a separate policy for every user, with something like:\n{ \u0026#34;Version\u0026#34;: \u0026#34;2012-10-17\u0026#34;, \u0026#34;Statement\u0026#34;: [ { \u0026#34;Action\u0026#34;: [\u0026#34;s3:ListBucket\u0026#34;], \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Resource\u0026#34;: [\u0026#34;arn:aws:s3:::homes\u0026#34;], \u0026#34;Condition\u0026#34;: {\u0026#34;StringLike\u0026#34;: {\u0026#34;s3:prefix\u0026#34;: [\u0026#34;Bartosz/*\u0026#34;]}} }, { \u0026#34;Action\u0026#34;: [ \u0026#34;s3:GetObject\u0026#34;, \u0026#34;s3:PutObject\u0026#34; ], \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Resource\u0026#34;: [\u0026#34;arn:aws:s3:::homes/Bartosz/*\u0026#34;] } ] } The good news is that AWS will let you create a policy like this:\n{ \u0026#34;Version\u0026#34;: \u0026#34;2012-10-17\u0026#34;, \u0026#34;Statement\u0026#34;: [ { \u0026#34;Action\u0026#34;: [\u0026#34;s3:ListBucket\u0026#34;], \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Resource\u0026#34;: [\u0026#34;arn:aws:s3:::homes\u0026#34;], \u0026#34;Condition\u0026#34;: {\u0026#34;StringLike\u0026#34;: {\u0026#34;s3:prefix\u0026#34;: [\u0026#34;${aws:username}/*\u0026#34;]}} }, { \u0026#34;Action\u0026#34;: [ \u0026#34;s3:GetObject\u0026#34;, \u0026#34;s3:PutObject\u0026#34; ], \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Resource\u0026#34;: [\u0026#34;arn:aws:s3:::homes/${aws:username}/*\u0026#34;] } ] } That is much more maintainable, as it can then be saved as a managed policy and given to all the users that need it.\nThe same variables that you have seen used in conditions can generally be used for making policies dynamic. 
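To see why the dynamic version scales so much better, here is a minimal Python sketch of the substitution AWS performs. This is purely illustrative - it is not how AWS evaluates policies internally, and the `resolve` helper is my own name for the toy substitution step - but it shows how one template, with the `${aws:username}` variable filled in at evaluation time, replaces a hand-written policy per user:

```python
import json

# The managed policy from the post. The ${aws:username} policy variable
# is resolved by AWS to the calling user's name at evaluation time.
POLICY_TEMPLATE = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": ["s3:ListBucket"],
            "Effect": "Allow",
            "Resource": ["arn:aws:s3:::homes"],
            "Condition": {"StringLike": {"s3:prefix": ["${aws:username}/*"]}},
        },
        {
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Effect": "Allow",
            "Resource": ["arn:aws:s3:::homes/${aws:username}/*"],
        },
    ],
}


def resolve(policy: dict, username: str) -> dict:
    """Toy stand-in for AWS's variable substitution: fill in ${aws:username}."""
    return json.loads(json.dumps(policy).replace("${aws:username}", username))


# One managed policy template serves every user - each user is confined
# to their own prefix under the shared "homes" bucket.
for user in ("Bartosz", "Alice"):
    print(resolve(POLICY_TEMPLATE, user)["Statement"][1]["Resource"][0])
# arn:aws:s3:::homes/Bartosz/*
# arn:aws:s3:::homes/Alice/*
```

The point the sketch makes is the maintainability one from above: adding a new user requires no policy change at all, because the substitution happens per request.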
Again, if you want to read some more official documentation on the topic, have a look here: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html\nUsing separate AWS accounts is sometimes the right thing to do One thing that confused me when I started using AWS in a real-world project was the idea of having multiple AWS accounts for different things. I assumed that given such a powerful IAM solution, we should use one account as much as possible. Wrong!\nThere are good reasons for using separate AWS accounts when there is a need for that. For example, it is perfectly ok to have a separate development account and a production account to make the separation between resources as strong as possible.\nYou may want to have administrators on one account with root-level access that you wouldn’t want to grant on another account. It also makes sense when you have multiple strictly separate departments in your company.\nI don’t want to go in-depth on every scenario here, as there are many. The point I want to make is this- if you feel like you may need another AWS account for something, you could be right! Don’t reject the idea just because you have a powerful IAM solution in AWS.\nAWS roles can work better than accounts and policies Ok, so you have multiple accounts in your organisation now, but you would like to share some resources between them. One way would be to use bucket policies. This sounds easy enough, but there is a problem. When an S3 object is created, the account that owns the bucket does not own the object. This is often not what you want.\nIn this and many other scenarios, the solution is to use an AWS cross-account role. With this, you give an IAM user from another account a role that they can assume in your account. 
That will let you manage the permissions from IAM and retain ownership of the objects without resorting to hacks such as requiring explicit permission grants in IAM policies.\nThis may sound a bit complicated, but again the most important thing here is the general message. Creating IAM roles that can be assumed across accounts is often the most maintainable, safe and easy way to deal with sharing resources across accounts.\nSummary AWS IAM is a very powerful IAM solution. In order to manage a non-trivial AWS account properly, you should familiarise yourself with it in depth. On the other hand, you may already be proficient in IAM, but there could be features that you are not aware of that would make your life much easier.\nWhether you are just starting or are an experienced AWS user, it is worth refreshing your AWS IAM knowledge now and then! I will leave you with a link to the official documentation: https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html\nLet me know if you have a favourite AWS IAM feature that you rely on.\n","permalink":"https://e4developer.com/posts/aws-iam-looking-at-some-of-the-more-advanced-features/","summary":"\u003cp\u003eAWS Identity and Access Management (IAM) is one of the most important services available in AWS. Most people know that you can create user accounts and assign permissions (policies). In this blog post, I will look at a few more advanced features of the AWS IAM.\u003c/p\u003e\n\u003ch2 id=\"aws-policies-can-have-conditions\"\u003eAWS policies can have conditions\u003c/h2\u003e\n\u003cp\u003eAWS policies can have conditions. That means that you can apply a policy that would only work in a specific time window. This can be, for example, next Wednesday between 1 pm and 4 pm if you expect a user to need specific permission around this time. 
In general, you can use these conditions to grant and revoke permissions to do things at specific dates and times.\u003c/p\u003e","title":"AWS IAM - Looking at some of the more advanced features"},{"content":"I have been posting a little bit less recently. In some ways, it is easier to write two blog posts a week than one blog post once in a while. I have recently read the book titled “Atomic Habits” and it motivated me to fix this state of affairs!\nAs you might know, I wrote over 100 blog posts in 2018 as a sort of personal challenge. It was not easy, but it was immensely satisfying! You can check out my summary post about this.\nIn 2019 I wanted to focus more on building things, experiment with video and maybe learn more programming languages. I planned to reduce my blogging to about once a week, but I didn’t plan to hold myself accountable… And here is the issue- I have lost a good habit!\n“Atomic Habits: An Easy and Proven Way to Build Good Habits and Break Bad Ones” by James Clear is a really interesting book that made it clear to me what I was doing wrong. I was spending too much time planning things and too little time doing things. This includes blogging. I don’t want to summarise the whole book for you, but definitely give it a read or check it out on Audible.\nSo what is next? I have decided to simplify my approach to choosing which project I am currently working on and how to maintain my blogging. I will basically carry out two habits:\nWork only on my main project– that means work on a specific thing until completion (whatever that means) Write a blog post about my main project (or something else if I have it ready) once a week. This way I will make sure to achieve something substantial before picking up my next project, as well as re-build my habit of regular blogging.\nAWS Solution Architect Professional Certification – my Main Project My current project is to pass the AWS Solution Architect Professional Certification. 
I already have the Associate level certification, but I would like to upgrade my knowledge by taking on the much harder version of that exam.\nI am not really doing that for the sake of the certificate; rather, I would like to get to know AWS at a much deeper level, as it is very important for my work. The exam provides nice guidance and the certificate would be useful even for my employer. It makes sense.\nWith that, you can expect quite a few extra articles about AWS and the things that I found particularly interesting while learning. Even when you feel like you “get it” when it comes to the cloud, there are still many techniques and designs that may surprise you.\nWhat about the videos and courses? At some point, I was working towards a video course about microservices. This is still a long-term plan of mine. The problem was that I was working on that course while automating my house with Raspberry Pi, learning AWS and doing a multitude of other things. This is a large project and requires focus.\nMaking a high-quality video course is likely to be my next large project after I complete the AWS certification. The reality is that I am not yet quite ready to make a course of the quality I aspire to.\nTo get ready I need to get much more experience with teaching and video. As a remedy, I plan to create some YouTube videos now and then… I know that I said I will work only on my main project- the idea is to make the videos around learning AWS!\nExpect to read more from me! With this, I would like to leave you with a promise of more content from me. Also, if you have managed to read this far, I want to really thank you. People reading and engaging with what I create are the main motivation to keep writing. Get ready to learn some AWS!\n","permalink":"https://e4developer.com/posts/aws-solution-architect-pro-good-habits-and-blogging/","summary":"\u003cp\u003eI have been posting a little bit less recently. 
In some ways, it is easier to write two blog posts a week than one blog post once in a while. I have recently read the book titled “Atomic Habits” and it motivated me to fix this state of affairs!\u003c/p\u003e\n\u003cp\u003eAs you might know, I wrote over 100 blog posts in 2018 as a sort of personal challenge. It was not easy, but it was immensely satisfying! You can check out my \u003ca href=\"https://e4developer.com/posts/i-wrote-100-blog-posts-in-2018-how-it-went-and-whats-next/\"\u003esummary post\u003c/a\u003e about this.\u003c/p\u003e","title":"AWS Solution Architect Pro, Good Habits and Blogging"},{"content":"I am very excited to share with you my mini video course on “Code Sharing in Microservices Architecture”.\nThe course consists of 5 videos:\nMicroservices – Code Sharing Microservices – Sharing Libraries Microservices – Sharing Integration Code Microservices – Sharing Domain Objects Microservices – Code Sharing Summary This is the first time I have ever done anything like this, so I am looking forward to the feedback. Do you enjoy these kinds of videos, and would you like to see more? 
Which video did you like the most/the least?\nHere is the entire playlist:\n","permalink":"https://e4developer.com/posts/code-sharing-in-microservices-architecture-youtube-course/","summary":"\u003cp\u003eI am very excited to share with you my mini video course on “Code Sharing in Microservices Architecture”.\u003c/p\u003e\n\u003cp\u003eThe course consists of 5 videos:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://youtu.be/5b49rSUFS1w\"\u003eMicroservices – Code Sharing\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://youtu.be/cnttrCzlhYE\"\u003eMicroservices – Sharing Libraries\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://youtu.be/w9R3KN7AW54\"\u003eMicroservices – Sharing Integration Code\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://youtu.be/w3B-BshE4j0\"\u003eMicroservices – Sharing Domain Objects\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://youtu.be/RghZ_pOU4XA\"\u003eMicroservices – Code Sharing Summary\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eThis is the first time I have ever done anything like this, so I am looking forward to the feedback. Do you enjoy these kinds of videos, and would you like to see more? Which video did you like the most/the least?\u003c/p\u003e","title":"Code Sharing in Microservices Architecture - YouTube Course"},{"content":"The first language I used to write a small program was Pascal. Since then I have worked professionally with Java, JavaScript, Groovy and a few more. Currently, I am learning a bit of Go in my spare time. In this blog post, I want to encourage you to learn a new language as well and provide you with a few ideas.\nThe more you know the easier it gets First of all, I have noticed that the more languages you already know, the easier it is to learn new ones. 
I guess like with everything, you start seeing familiar patterns and solutions and in general- things start to make sense much quicker.\nMany languages are also very similar. For example, if you already know Java, learning Groovy is very simple. Knowing many languages may come in useful when you suddenly have to start learning a new one for your job.\nMastering a language You don’t have to master every language you come to work with. Sometimes you need to know just enough to be productive.\nI have recently worked on a small IoT project where Python made the most sense, as all the libraries and examples I had were using Python. If you are already a programmer, learning enough Python to write a Flask server can be done in a weekend and it is quite fun!\nBesides my advice to try many languages, it is genuinely useful to also master a language or two. If your full-time employment revolves around writing Java, you really should know it inside and out. Even all that stuff about nested classes and concurrency. You owe it to your employer (or clients) and yourself!\nJava vs C# Speaking of Java, I have realised that I used to go into arguments about the superiority of Java vs C# or the other way round. The best solution to that problem is to learn both if you already know one.\nWith Java and C#, you can see subtle (or not so subtle) differences in how things are done. 
Learning about LINQ or Spring Boot can be a very interesting experience for people from either camp.\nI have been looking to refresh my C# knowledge (the last large program I wrote was for my master’s thesis) as it is used in games development with Unity.\nMake sure that something like the Java vs C# argument never closes doors for you to exciting technologies, be it game development or big data!\nJavaScript and the brand new world If moving between Java and C# is a rather gentle jump, then going into JavaScript development with Node.js is a whole new level!\nThese days there is so much JavaScript code being written and used that as a developer you really should look into it. There is a good chance that you will either write or read some JavaScript in the near future… it is also really interesting!\nDespite its many flaws (or strengths, as some would argue!) JavaScript is an insanely popular language that is appearing everywhere. Frontends, microservices, even serverless development or voice assistants like Alexa have some of their core libraries written mainly for JavaScript.\nIf you want to learn something super practical, definitely get literate with JavaScript, Node.js and the NPM ecosystem.\nDid someone say RaspberryPi… or Serverless or Machine learning? Python is here. Python is currently the fastest-growing “mainstream” language. This is at least according to the TIOBE index. There is quite a renaissance in the language at the moment.\nPython’s recent rise in popularity can be attributed to many factors:\nData scientists’ love affair with Python Serverless architectures having a good fit and support Python being overall a great language Maybe even IoT Mysterious workings of the universe I am using Python for controlling my RaspberryPi and having great fun with it. It is a very clean, pretty and expressive language. 
If you are looking for something pleasant to write in- give it a try.\nThe functional world of Scala and Haskell Ok, so you really want to try something new? Functional programming promises much cleaner code, fewer bugs and a better programming experience.\nTo be honest, I sometimes wish days had more hours, as I never managed to get deep enough into functional programming in the “real world”. I used Haskell at university and it was unlike anything I have programmed with before or after.\nIf you want to approach functional programming with practicality in mind, possibly even replacing Java, you should seriously look at Scala. This is a truly functional programming language used in the industry.\nIf you are looking for an academic approach to functional programming and want to learn more about the science of it, Haskell is probably the best-suited language for that. If you disagree, I would be happy to read your opinion in the comments.\nDevOps fueled Go We talked about quite a few languages, but what is the one language everyone involved in DevOps is talking about? It is Go!\nDocker, Terraform, Istio, Kubernetes… Do I need to go on? All of these technologies rely on Go. There is no other language out there that comes even close in popularity in the DevOps world.\nI believe the reasons for this are:\nGo is super fast compared to everything other than C/C++/Rust Go is very easy to learn Go makes it easy to work with networking and systems Go has nice multi-threading support If you are serious about infrastructure, containers etc. Go is the language to learn.\nThe future with Rust Guess what is the most loved language out there, year by year? Well… if you read the headers here, you probably guessed- according to the yearly developer survey by StackOverflow (2019 here, Python is second!) it is Rust!\nRust is an incredibly fast systems programming language that can compete with C/C++ on speed… While being very modern and pleasant to work with! 
I have not gone much beyond “hello world” with my Rust knowledge, but getting the basics down is high on my “to do” list for 2019.\nWhat about everything else? I did not mention here learning C++, PHP, SQL or Swift. Does that mean that you should not learn them? Of course not! If you are interested in learning a different language or have a good reason to, you absolutely should.\nI wanted to show you how many good reasons there are for getting familiar with different languages. The reality is that once you have learned one, you know you can learn anything! Just motivate yourself and enjoy the journey!\n","permalink":"https://e4developer.com/posts/you-should-learn-multiple-programming-languages/","summary":"\u003cp\u003eThe first language I used to write a small program was Pascal. Since then I have worked professionally with Java, JavaScript, Groovy and a few more. Currently, I am learning a bit of Go in my spare time. In this blog post, I want to encourage you to learn a new language as well and provide you with a few ideas.\u003c/p\u003e\n\u003ch2 id=\"the-more-you-know-the-easier-it-gets\"\u003eThe more you know the easier it gets\u003c/h2\u003e\n\u003cp\u003eFirst of all, I have noticed that the more languages you already know, the easier it is to learn new ones. I guess like with everything, you start seeing familiar patterns and solutions and in general- things start to make sense much quicker.\u003c/p\u003e","title":"You should learn multiple programming languages"},{"content":"Working for a consultancy, I have already had a chance to build microservice-based systems in large financial organisations as well as public sector ones. When sharing my experience with other developers, there is one topic that often comes up- many people wished that they had a service mesh from the start! 
In this article, I will explain what a service mesh is and why it is so useful!\nService mesh defined A service mesh is a dedicated infrastructure layer that helps with managing your service-to-service communication. It is often implemented as a lightweight reverse-proxy using the sidecar pattern, deployed in a separate container. The two most popular implementations are currently Istio and Linkerd (which absorbed Conduit).\nThe common features that are provided by a service mesh include:\nControlling the traffic flow and API calls – making testing, upgrading and managing the system easier Managing your authentication, authorization and encryption – which are vital for a secure, production-ready system Providing rich tracing and monitoring – making debugging easier Minimum performance overhead – Istio and Linkerd proxies are implemented in C++ and Rust respectively, making them blazing fast Real microservices, real problems Here I want to give you a selection of “battle stories” that would have been much easier if the respective systems were using a service mesh:\nReplacing an authorization layer across the project – this system had a proof-of-concept style authorization layer that was later replaced with Keycloak. This included changes to every single microservice and many shared libraries. This may not have been trivial in a service mesh, but having the authorization layer clearly separated from individual services would have resulted in much less work. Introducing encryption everywhere – in this scenario, a requirement suddenly came in to introduce strong encryption for pretty much every single communication. With nothing like a service mesh in place, you can imagine how difficult and time-consuming this was. In comparison, a service mesh would have solved that trivially. Audit requirement for all API calls – one thing is the ability to debug problems in a system, another is a requirement to audit all the API calls. 
Once you get the idea of what a service mesh is, this again seems like a perfect use-case that can be a pain if all microservices are deployed “vanilla” style. Switching between databases – maybe not as difficult in a standard architecture, but definitely not as clean and more invasive. In the case described here, it always required restarts (re-deployments) of all microservices in Kubernetes. Think of the time wasted! I could go on here, but I wanted to focus on real examples that did happen and made the team go “if only we had a service mesh!”. So why didn’t we?\nService mesh – the initial investment Installing Istio or Linkerd is not particularly difficult, especially when you are just starting to build your system. The problem is that when you are only starting, it may not be immediately obvious that you need a service mesh! Your system is still small and easy to work with, so why add all that bloat?\nThis is where experience comes in. Most of the developers I have worked with or talked to are either on their first or second large-scale microservices project. Most people lack the practical experience to realise that a service mesh is such a valuable pattern. If you are reading this – trust me! Your team will thank you if you convince them to spend just a little extra time adding it to your system.\nI believe that in a few years a service mesh will be absolutely standard for all new microservice architectures. All we need is more time and experience in the community, hence: spread the word.\nBut why not just use an API Gateway between services?… An interesting idea that I have encountered is using an API Gateway between microservices as an easy solution. DO NOT USE AN API GATEWAY IN PLACE OF A SERVICE MESH. I need to go all caps here, as this is a trap!\nAPI Gateways look like a very similar offering, but they are much less integrated and often much too slow for a large microservices system.
Some of my colleagues had the unfortunate experience of falling into this trap and then having to completely remove the “internal API Gateway”. To be honest, in the past I also thought it sounded like a good idea…\nAn “internal API Gateway” sounds like an easier thing to do, but it can have a dramatic performance and security impact on your system. Please save yourself a lot of time and pain and use a proper service mesh instead.\nSummary The “service mesh” is a crucial microservices architecture pattern that you should know. If you are starting a new microservices project, consider including Istio or Linkerd (or another service mesh) as a part of your system. If you are already running microservices, investigate how hard or easy it would be to add a service mesh. If you already have experience with a service mesh, let me know in the comments or share it on Twitter.\n","permalink":"https://e4developer.com/posts/microservices-why-do-you-need-a-service-mesh/","summary":"\u003cp\u003eWorking for a consultancy, I already had a chance to build microservice-based systems in large financial organisations as well as public sector ones. When sharing my experience with other developers, there is one topic that often comes up: many people wished that they had a service mesh from the start! In this article, I will explain what a service mesh is and why it is so useful!\u003c/p\u003e\n\u003ch2 id=\"service-mesh-defined\"\u003eService mesh defined\u003c/h2\u003e\n\u003cp\u003eA service mesh is a dedicated infrastructure layer that helps with managing your service-to-service communication. It is often implemented as a lightweight reverse-proxy using the sidecar pattern, deployed in a separate container.
The two most popular implementations are currently \u003ca href=\"https://istio.io/\"\u003eIstio\u003c/a\u003e and \u003ca href=\"https://linkerd.io/\"\u003eLinkerd\u003c/a\u003e (which absorbed Conduit).\u003c/p\u003e","title":"Microservices - Why Do You Need A Service Mesh?"},{"content":"You have probably heard about Ops, DevOps, maybe even about GitOps! This short article is a “jargon buster” explaining what all these different terms mean and which ones you should pay attention to.\nOps This is a shorthand for IT Operations. That means running software, providing support and administering networks and servers. Basically, most of the important day-to-day IT business that is not testing or development.\nAn interesting but wrong trend these days is to simply rename your Operations team as a DevOps team, pat each other on the back and think that you are following best practices… If only life was that easy!\nDevOps I will use the same definition that I gave in my “Journey to DevOps Whitepaper”:\n“The name DevOps comes from the amalgamation of Development and Operations. In essence, it’s a software development methodology best defined by the “DevOps Mindset” which guides its implementation and\nmanagement style, and the “DevOps Culture” it creates.”\nThe point here is that DevOps is a software development methodology and a culture, not a single team. The “Three Ways of DevOps” are:\nFlow: Progressing work fast Feedback: Getting feedback on the work as soon as possible Continual Experimentation and Learning: self-explanatory If you want to learn more about DevOps principles, I really recommend checking out my whitepaper, or reading the “Journey to DevOps” blog post I wrote on the Scott Logic website.\nSecOps I personally would consider SecOps a part of DevOps with a focus on security.
It makes sense to talk about it separately if you want to focus exclusively on how well security is integrated into your DevOps culture.\nIt is also all about the three pillars – flow, feedback and continual learning, but applied to the work of security specialists.\nTestOps Guess what? It is DevOps once again but with a focus on testing. Think automated testing, with quality on everybody’s mind. I consider testing so fundamental to DevOps and software delivery that I don’t really see the need for the distinction, but I am sure that some marketing departments do!\nGitOps This means infrastructure as code, but using Git to store that code. I think it is a great idea; I may not like the name very much (it rides on the “Ops” hype), but there is real merit to it. Weaveworks have a great explanation of the practice here.\nSummary DevOps is a true revolution when it comes to delivering software. Because of the tremendous and positive impact it has had, everyone wants to be somehow associated with the term. There is nothing wrong with this, but please make sure to focus on the DevOps culture, not on coming up with ever fancier names while not changing how you work!\nPS: If I missed any of the Ops, let me know! It is fun to see what else people are coming up with!\n","permalink":"https://e4developer.com/posts/quick-guide-to-all-the-ops/","summary":"\u003cp\u003eYou have probably heard about Ops, DevOps, maybe even about GitOps! This short article is a \u003cem\u003e“jargon buster”\u003c/em\u003e explaining what all these different terms mean and which ones you should pay attention to.\u003c/p\u003e\n\u003ch1 id=\"ops\"\u003eOps\u003c/h1\u003e\n\u003cp\u003eThis is a shorthand for IT Operations. That means running software, providing support and administering networks and servers.
Basically, most of the important day-to-day IT business that is not testing or development.\u003c/p\u003e\n\u003cp\u003eAn interesting but wrong trend these days is to simply rename your Operations team as a DevOps team, pat each other on the back and think that you are following best practices… If only life was that easy!\u003c/p\u003e","title":"Quick Guide to all the “Ops”"},{"content":"With Oracle stopping free updates for Java 8 and effectively only maintaining free updates for the latest Java release (12 at the time of writing), a natural question arises… Which JDK should I use? This is a short article providing answers, depending on your circumstances.\nLet’s look at different scenarios that you may be facing:\nYou are using Java 8 and want to keep Long Term Support (LTS) for free In this case, you should use the Amazon Corretto OpenJDK 8 distribution. Amazon Corretto is a free JDK distribution that will provide you with free long-term support:\nLong-term support (LTS) for Corretto includes performance enhancements and security updates for Corretto 8 until at least June 2023 at no cost.\nCorretto FAQ\nAlternatively, Azul provides another JDK 8 distribution called Zulu. This also comes with LTS. It is worth checking out, as they also provide support plans should you later wish to move from free to paid Java…\nIt is worth knowing that OpenJDK is also an option and there are projects keeping it up to date.\nYou want to use the latest LTS version of Java (11 currently) for free Once again, I recommend going with Amazon Corretto or alternatively Azul Zulu.\nYou are using Java and want to pay for Long Term Support (LTS) If for some reason you want to pay for support, you have two popular options: Oracle or Azul. If you are willing to pay for your Java you will be better off doing some additional research to make sure that the support terms suit you.
This is beyond the scope of this article.\nIt is worth noting that Azul Zulu also provides LTS support for versions that are not under LTS from Oracle.\nYou are already an Azure or AWS customer For those who are already Amazon or Microsoft cloud customers, there is additional support provided for Corretto and Azul Zulu respectively.\nIf you have an AWS Support Plan, “you can reach out for assistance with Corretto through your plan.” This is clarified in the Corretto FAQ.\nIf you are developing Java software on Azure, you can use Azul Zulu Enterprise for free. Quote from the release:\n“Java developers on Microsoft Azure and Azure Stack can build and run production Java applications using Azul Systems Zulu Enterprise builds of OpenJDK without incurring additional support costs”\nlink to the press release\nYou want to use the latest Java available In this case, you can freely choose between Oracle, OpenJDK and Azul Zulu. All these latest versions are free, but if you go with Oracle or OpenJDK you may need to keep upgrading to the new Java versions to keep receiving security patches.\nSummary You don’t need to pay to use any version of Java, as there are plenty of supported distributions available. Make sure that you are using the right one and keep coding!\n","permalink":"https://e4developer.com/posts/which-java-jdk-should-i-use-which-provide-free-lts/","summary":"\u003cp\u003eWith Oracle stopping free updates for Java 8 and effectively only maintaining free updates for the latest Java release (12 at the time of writing), a natural question arises… Which JDK should I use?
This is a short article providing answers, depending on your circumstances.\u003c/p\u003e\n\u003cp\u003eLet’s look at different scenarios that you may be facing:\u003c/p\u003e\n\u003ch2 id=\"you-are-using-java-8-and-want-to-keep-long-term-support-lts-for-free\"\u003eYou are using Java 8 and want to keep Long Term Support (LTS) for free\u003c/h2\u003e\n\u003cp\u003eIn this case, you should use the \u003ca href=\"https://aws.amazon.com/corretto/\"\u003eAmazon Corretto\u003c/a\u003e OpenJDK 8 distribution. Amazon Corretto is a free JDK distribution that will provide you with free long-term support:\u003c/p\u003e","title":"Which Java JDK should I use? Which provide free LTS?"},{"content":"I am quite excited to share with you “The Journey to DevOps” (announcement and the download link here), the first white paper that I have published with my company Scott Logic.\nThis white paper will help you:\nUnderstand the DevOps mindset Learn more about the challenges facing public sector organisations when adopting DevOps Assess your DevOps maturity Get actionable advice on embarking on your DevOps journey Appreciate the value that comes from embracing DevOps If you are interested in reading more, download our white paper from the Scott Logic blog: “The Journey to DevOps” – in PDF format.\n","permalink":"https://e4developer.com/posts/the-journey-to-devops-my-first-white-paper/","summary":"\u003cp\u003eI am quite excited to share with you “The Journey to DevOps” (\u003ca href=\"https://blog.scottlogic.com/2019/03/25/the-journey-to-devops.html\"\u003eannouncement and the download link here\u003c/a\u003e), the first white paper that I have published with my company \u003ca href=\"https://www.scottlogic.com/\"\u003eScott Logic\u003c/a\u003e.\u003c/p\u003e\n\u003cp\u003eThis white paper will help you:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003eUnderstand the DevOps mindset\u003c/li\u003e\n\u003cli\u003eLearn more about the challenges facing public sector organisations when adopting 
DevOps\u003c/li\u003e\n\u003cli\u003eAssess your DevOps maturity\u003c/li\u003e\n\u003cli\u003eGet actionable advice on embarking on your DevOps journey\u003c/li\u003e\n\u003cli\u003eAppreciate the value that comes from embracing DevOps\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eIf you are interested in reading more, download our white paper from the Scott Logic blog: \u003ca href=\"https://blog.scottlogic.com/2019/03/25/the-journey-to-devops.html\"\u003e“The Journey to DevOps” – in PDF format\u003c/a\u003e.\u003c/p\u003e","title":"“The Journey to DevOps” - my first white paper"},{"content":"We have all had quite a lot of time to get familiar with Java 8 and all the amazing features that it provided us with. Since then quite a lot has happened; with the releases of Java 9, 10, 11 and, this month, 12 it is hard to stay on top of all the language changes happening! Here I will focus exclusively on the changes to the language, leaving library changes to other writers.\nIt is important to know the new APIs and libraries introduced, but knowing how to read the language is vital. This also keeps the blog post within a reasonable scope – after all, a lot has changed from Java 8 to Java 12! Without further ado – let’s begin, version by version!\nJava 9 – Project Jigsaw and the rise of Modules The headline feature of Java 9 was the introduction of Modules (Project Jigsaw). At the top level of the project, you can define a module-info.java file that looks something like this:\nmodule com.e4developer.modules.tricks { requires com.e4developer.modules.secret.sauce; exports com.e4developer.modules.tricks.trade; } This lets you modularise your application better. If you are interested in this concept, you probably need quite a lot more than this short paragraph, but it should give you an idea.\nJava 9 also introduced private methods in interfaces.
This makes writing lengthy default interface methods somewhat more pleasant, but then again – you should be careful with this in the first place… Interface default methods should be used primarily for ensuring backwards compatibility of your APIs. Here you have a trivial example:\npublic interface Spaceship { default void keepTalking(){ System.out.println(makeText()); } default void keepShouting(){ System.out.println(makeText().toUpperCase()); } /** * Be careful with private methods in interfaces. * The main place for implementation is inside classes. * @return */ private String makeText(){ String message = \u0026#34;hey there!\u0026#34;; while(message.length() \u0026lt; 1000) message += message; return message; } } We can use the diamond operator with anonymous inner classes, making this legal:\nSuperCalculator\u0026lt;Integer\u0026gt; superCalculator = new SuperCalculator\u0026lt;\u0026gt;() { //implementation }; And try-with-resources no longer needs the variable explicitly declared in the statement. This slightly improves the style:\nFinalResource finalResource = new MyFinalResource(); try (finalResource) { // use finalResource } Java 10 – introducing “var” Java 10 focused mostly on adding and removing APIs, so its release notes read rather easily… There is only one large language change – local-variable type inference, a.k.a. the “var” keyword. With this you can give some of your code a somewhat more modern look and feel:\nvar myABC = List.of(\u0026#34;a\u0026#34;, \u0026#34;b\u0026#34;, \u0026#34;c\u0026#34;); var numbers = List.of(1, 2, 3); for (var letter : myABC) { System.out.println(letter); } for (var i = 0; i \u0026lt; 100; i++) { System.out.println(i); } Java 11 – “var” support in lambdas Java 11 introduced full var support in lambdas.
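As a quick sanity check that the var pieces fit together, here is a minimal self-contained sketch (the VarDemo class, its values and the BinaryOperator usage are my own additions, not from the original post) combining Java 10 local-variable inference with a Java 11 var lambda:

```java
import java.util.List;
import java.util.function.BinaryOperator;

public class VarDemo {

    // Java 10: var infers the types of locals and loop variables.
    // Java 11: var is also allowed for lambda parameters.
    static int sumLengths() {
        var words = List.of("hey", "there");              // inferred as List<String>
        BinaryOperator<Integer> add = (var a, var b) -> a + b;
        var total = 0;                                    // inferred as int
        for (var word : words) {
            total = add.apply(total, word.length());
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sumLengths()); // prints 8
    }
}
```

One practical reason to prefer (var a, var b) over the fully implicit (a, b) form is that var parameters can carry annotations, such as @Nonnull.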
Previously you could write this:\n(Integer a, Integer b) -\u0026gt; a + b Now, this is also allowed:\n(var a, var b) -\u0026gt; a + b Language-wise, everything else stays the same.\nJava 12 – the new switch statement Java 12 brings us (as a preview feature) a new, much-improved syntax for the switch statement. I will use an example from the official documentation here. Consider the following switch statement:\nswitch (day) { case MONDAY: case FRIDAY: case SUNDAY: System.out.println(6); break; case TUESDAY: System.out.println(7); break; case THURSDAY: case SATURDAY: System.out.println(8); break; case WEDNESDAY: System.out.println(9); break; } With Java 12 you can write it in a much cleaner way:\nswitch (day) { case MONDAY, FRIDAY, SUNDAY -\u0026gt; System.out.println(6); case TUESDAY -\u0026gt; System.out.println(7); case THURSDAY, SATURDAY -\u0026gt; System.out.println(8); case WEDNESDAY -\u0026gt; System.out.println(9); } I think this is a nice addition to the language. We were meant to receive raw String literals in Java 12 but they were dropped from the release… Worry not, Java 13 is just around the corner!\nFinal thoughts With the new release cadence of Java it is challenging to be aware of everything that is changing. The reality is that most changes are API- and library-related and do not affect the fundamental way that Java is written. Don’t despair – it is still possible to stay up to date with the new language features and to write good, clean Java!\n","permalink":"https://e4developer.com/posts/java-9-to-12-all-the-language-modifications/","summary":"\u003cp\u003eWe have all had quite a lot of time to get familiar with Java 8 and all the amazing features that it provided us with. Since then quite a lot has happened; with the releases of Java 9, 10, 11 and, this month, 12 it is hard to stay on top of all the language changes happening!
Here I will \u003cstrong\u003efocus exclusively on the changes to the language\u003c/strong\u003e, leaving library changes to other writers.\u003c/p\u003e","title":"Java 9 to 12 - all the language modifications"},{"content":"To be a good software developer you need to know some things very well. You can’t get away without actually knowing your programming language of choice, or learning good software development practices. Besides mastering some skills, you should also have a large amount of general, high-level knowledge. This blog post is about that knowledge.\nThe concept of “Just in time knowledge” The idea behind “Just in time knowledge” (from this point JITK) is to be able to get the knowledge that you need, only when you need it. For example, actually learning how to use Kafka only when you need to work with Kafka.\nThe concept of JITK is strongly related to “Just in time learning”, but I see it as more basic. Sometimes you need only just enough knowledge about a technology, rather than actually learning how to use it.\nBefore you decide that this sounds awful, let’s consider cases where such knowledge may be useful.\nApplications of “Just in time knowledge” When choosing the right technology – often you need to consider multiple technologies that could fulfil your needs. Google can help here, but that selection process will be easier and more informed when you know where to start. At least knowing what is out there can be very helpful and will speed up the process. When designing software solutions – software design in practice is influenced by the availability of different tools and technologies. For example, when designing a microservices system, you may not consider a service mesh approach if you don’t know that it is something worth considering. When solving bugs/issues – people working with software spend a lot of their time tracking down bugs and troubleshooting issues.
Knowing what can possibly go wrong will make this process much easier, even if you don’t already know all the details. When learning new technology – a particularly useful case of JITK is learning new technologies. You don’t need to know everything about a programming language to start using it. It is often easier to get started and fill in the blanks as you go (I suggest doing that in the context of personal learning, not necessarily professional development). I am sure that there are many more applications for “Just in time knowledge”. The message here is that this is a useful technique that should be used alongside mastery of some core technical skills.\nGoing broad rather than going deep To start applying the JITK approach, you should cultivate a broad high-level knowledge of multiple topics. An example list of things worth knowing about for a software developer focused on JVM microservices would be:\nProgramming languages Microservices patterns SOA Enterprise Java Spring Microframeworks Kubernetes AWS / Azure cloud Choreography/Orchestration Security Testing DevOps Continuous Integration / Continuous Delivery Service Mesh API Gateway Agile Development practices Monitoring Debugging API Design and many more… I do not suggest becoming an expert in every single topic listed here, but it is helpful to have some idea of what each of these fields/topics contains.\nWith high-level knowledge of multiple topics, you will be less likely to make major mistakes, as you will “know what you don’t know”.\nHow to stay informed with all the technology and processes? I have explained what you should know, so it is only fair to provide some ideas on how to foster this knowledge:\nRead blogs/newsletters – it is worth having a list of blogs that you follow to stay informed. Some of them provide interesting newsletters.
I can recommend: https://www.baeldung.com/java-web-weekly and https://info.jetbrains.com/Java-Annotated-Subscription.html Stay up to date with Twitter – I welcome you to follow me @e4developer, but even better – read “How to stay up to date with Java and Tech? Use Twitter!” Use Reddit – I find Reddit a good place to get a quick update on what’s hot in the world of software development. Again, you can find more details here – Reddit – the Java goldmine Stay curious – if you hear a concept being mentioned that you have no clue about – look it up. You will learn a lot as long as you are surrounded by others trying to build great software. Summary I hope this blog post will give you some extra motivation for learning about things that are outside of your comfort zone. Mastery is important, but to become a great software developer, you need to know at least a little bit about a lot of things! Good luck!\n","permalink":"https://e4developer.com/posts/just-in-time-knowledge-and-the-value-of-knowing-a-little/","summary":"\u003cp\u003eTo be a good software developer you need to know some things very well. You can’t get away without actually knowing your programming language of choice, or learning good software development practices. Besides mastering some skills, you should also have a large amount of general, high-level knowledge. This blog post is about that knowledge.\u003c/p\u003e\n\u003ch2 id=\"the-concept-of-just-in-time-knowledge\"\u003eThe concept of “Just in time knowledge”\u003c/h2\u003e\n\u003cp\u003eThe idea behind “Just in time knowledge” (from this point JITK) is to be able to get the knowledge that you need, only when you need it. For example, actually learning how to use Kafka only when you need to work with Kafka.\u003c/p\u003e","title":"“Just in time knowledge” and the value of knowing a little"},{"content":"Writing a CV is a bit of a game. You are trying to create a short piece of writing, focused on yourself, that will get you to the interview.
Ideally, this short text will also help set you up for a successful interview and boost your chances of landing a job. I have screened hundreds of CVs over the past few years; here I want to share with you some advice on how to win the CV game.\nWhat do you want to win with your CV? As already mentioned, you can see preparing your CV as a game. As with any game, you can play it well, or you can play it badly. Let’s look at the objectives once again:\nYou want your CV to be read rather than simply discarded You want to be selected for an interview thanks to your CV You want the interview to go your way – you want to get the sort of questions and conversations which will show you in the best possible light You want to get the job that you are going for (rather than an offer for a slightly different job) From this point forward, I will look at different parts of a CV and identify good and bad strategies.\nThe length… How long would you want your CV to be? 1, 2 or maybe 8 pages?\nGood Strategies Keeping the CV short – 1-2 pages will maximise the chances of someone actually making an effort to read it Trying to make it shorter rather than adding unnecessary length Converting your CV to PDF to ensure that the layout stays unchanged Putting some effort into getting a clean layout – you can fit quite a lot on 2 pages Using a font that is easy to read (big enough) Bad Strategies Going significantly above 2 pages. It may be acceptable to have 3, maybe 4 pages, but you are entering dangerous territory here. There are people who straight up reject any CVs above 4 pages Thinking that the length of the CV does not matter Diluting your CV with less important/impressive things An Introduction Most CVs open with a few sentences about yourself, introducing who you are and why someone should hire you.\nGood Strategies Keeping the introduction specific to the role you are applying for. The more specific the better. Writing in the first person – it is often more engaging.
Putting your biggest achievement/the most impressive thing about you here. Bad Strategies Putting clichés into an introduction. Everyone is a “team player”, everyone is “motivated” etc. Writing it in the third person, in a very formal style. You can do it if you know exactly what you are doing; I would not advise it though. List of Technical Skills Most developers have a list of technical skills – programming languages and frameworks that you are proficient with. This section can also go down very well, or end up a disaster. It often sets up large parts of the interview that follows.\nGood Strategies Knowing every technology that you list there pretty well Keeping it focused on the tech that is relevant to the interview (perhaps putting these languages and frameworks at the front) Organising it logically If you decide to list technologies that you are still learning (but are not fully proficient in yet), making that clear. Bad Strategies Turning that section into keyword bingo. Listing every version of a technology you have ever worked with. Writing Java 5,6,7,8,9 or Oracle 9, 10, 11 does not look good, unless you can elaborate in depth on every specific version (and this is relevant) Putting things like MS Office, Windows, Linux etc. You can do this if you have some exceptional skills (like Linux administration, or advanced MS Excel macros etc.) Not knowing a technology that you listed (lying). If you are caught, it puts everything else on your CV in doubt. It is also unethical. Education / Qualification This is where you put your university, education, certification, boot camps etc.\nGood Strategies Keeping this list short and clear. Listing the full name of your school and qualification. Making the most relevant qualification stand out. Bad Strategies Adding useless information like the grades you got in your primary/secondary school. Not listing the full name of your institution (even if it is international, or relatively unknown). Attempting to lie or make things unclear.
Listing certifications achieved many years ago in legacy technologies that you are not going to use. Experience / List of Previous Roles A list of previous roles and experience. This is where many CVs go horribly wrong…\nGood Strategies Keeping the description focused on your role. Listing your personal achievements and contributions. Keeping the descriptions brief, or only including the title if it is not relevant to the job that you are applying for. Tailoring the descriptions to the job that you are applying for. Bad Strategies Repeating all the technologies that you already listed previously. Adding a long description of different companies’ businesses. Yes, we know that Goldman Sachs is an investment bank… Adding half a page on 10 different roles, resulting in an 8-page-long CV. Making all the descriptions so similar that the reader loses the will to read. If they are so similar, then your job title is probably enough. Adding a list of things that you have “participated in”. This CV should focus on you. Personal projects and interests Many CVs can include personal projects, interests etc. This can be very helpful or can backfire.\nGood Strategies Adding your interests and projects if they are relevant to the job. If you are working with Arduino, or creating Alexa skills in your spare time – that’s great, add it here! If you have achieved something impressive, but it is slightly unrelated to the job, you may include it as well, but don’t overdo it. If you have raised large amounts for charity, or participated in a sport at a national level – one line is probably enough. Bad Strategies Making this section very long. This is a CV, not your social media account. Adding banal things like “watching TV” or “interested in technology”. Putting this section at the start. Being creative Some CVs are a little more creative than others. Creativity on your CV is something I would generally not advise for software developers.
I don’t see how it can help you “win”, unless you feel that your CV would not be good enough on merit alone. Taking that into account, I advise as follows.\nGood Strategies Limiting creativity if you have good experience and are qualified for the job Focusing your creativity on things like providing good descriptions of your roles or a beautifully written introduction If your CV is lacking, or you are from a less common background (self-taught, for example, or changing professions), you may use creativity to stand out and win that interview invitation! Bad Strategies Making a very different CV when there is no need for it. Omitting important information from your CV for the sake of creativity. Summary There are different opinions on what makes a good CV. Different companies may have different cultures, so it is difficult to give one-size-fits-all advice for CV writing. What I advise is to take these strategies as a baseline, do some research on the places that you are applying to and develop them even further. Your battle for your dream job starts with your CV!\n","permalink":"https://e4developer.com/posts/what-makes-a-good-software-developer-cv/","summary":"\u003cp\u003eWriting a CV is a bit of a game. You are trying to create a short piece of writing, focused on yourself, that will get you to the interview. Ideally, this short text will also help set you up for a successful interview and boost your chances of landing a job. I have screened hundreds of CVs over the past few years; here I want to share with you some advice on how to win the CV game.\u003c/p\u003e","title":"What makes a good software developer CV?"},{"content":"You might have heard of the 10,000-hour rule. It supposedly takes 10,000 hours of practice to become a master at something. This number will, of course, vary depending on what you are trying to master – some skills have a much higher bar than that, others may be easier. What is also important is the quality of that practice.
In this blog post I will look at different ways you can carry out deliberate practice as a developer.\nYou will not become an amazing developer by constantly doing the same easy things. Let’s be honest here – no one becomes a master driver by simply driving a car for 10,000 hours. To become an amazing driver you need to push your limits – on a race track, in difficult weather conditions, receiving critique (or timing yourself). It is somewhat similar for developers. Many have worked 40 hours a week (which would give you 10,000 hours in about 5 years), yet not everyone around us is a master!\nWhat makes some practice special? For the purpose of this article, we can characterise deliberate practice as:\nPractice requiring focus Practice that has a goal of improving performance Practice where we receive feedback on our efforts So how does that apply to software development? Can we incorporate deliberate practice into our normal work-day or does it have to be done after hours? The good news is that there are many ways you can do it during and after your standard work day. Let’s have a look at my favourite practices.\nStart pair programming more! One unexpected way to incorporate deliberate practice into our working life is by doing more pair programming.\nLet’s think about this for a second – when you are programming with someone else, you focus more, you are trying to either elevate your own programming skill or help your partner, and you constantly provide feedback to each other.\nPair programming is a proven technique for achieving higher productivity. The fact that it makes you a better programmer is an added benefit!\nCreate and review merge requests Making use of merge requests is yet another simple way to incorporate some deliberate practice into our work.\nWhen you put your work up for merge request review, you are seeking quality feedback.
Make sure to act on that feedback and try harder next time- make your merge request as good as possible.\nThe same goes for reviewing merge requests. Take your time, think the code through and make sure that you understand absolutely everything contained in that code. This is a good reason why reviewing more experienced developers’ code can be very helpful.\nDon’t shy away from the challenge Another obvious way to push yourself more in your development job is… to push yourself more! Make sure that you are not always picking the same easy tasks and features that you know how to develop.\nIf you are a seasoned back-end developer- take the hardest tasks available related to your domain, or even- try something new. Put yourself outside your comfort zone. Maybe even work on a front-end story. Just do something that is not going to be easy for you.\nIf you are not writing enough high quality tests- make sure that you start, and that you do these in an exemplary fashion.\nThe point here is not to tell you exactly what to do, but to let you know that you need to challenge yourself and make yourself a little uncomfortable. This, together with pair programming and merge requests, will make you better.\nBut wait, there is more… The following ideas may be more suited for after-work practice… but who said that you can’t do them during lunch? Or maybe you have dedicated learning time at work? If there is a will, there is a way!\nTry some coding challenges Completing different coding challenges is about as pure a form of deliberate practice as you can get. You pick a difficult challenge, you can time yourself and you get to see if you managed to complete it. Focus, improving your performance, learning, feedback.\nIf you don’t know how to get started with coding challenges, don’t worry.
They range from very simple to near impossible, so I am sure you can find some that will match your level.\nI have written an article about Keeping your skills sharp with HackerRank if this topic interests you.\nLearn a new language, framework, technique etc. This is pretty straightforward- pick something new to learn. Let’s say that you decided to learn Python; how do you turn that into deliberate practice? By building something with it as soon as possible!\nWhen you build things and share them with the world (either by showing your colleagues, open sourcing them or writing blog posts about them) you are exposing yourself to the oh so important feedback and critique.\nBuild things, release projects You should balance your time between learning new things and mastering things that you already know. One thing that many developers don’t do enough (yours truly included) is building and releasing our own projects.\nThere is a lot of value and learning in seeing something difficult through to completion, using the tools we know.\nSummary Just writing software for 10,000 hours will not make you a “master” developer. Like with any other discipline, the key here is deliberate practice. As you can see, you can do it at work and in your own time. See you in 10,000 hours!\n","permalink":"https://e4developer.com/posts/deliberate-practice-for-software-developers/","summary":"\u003cp\u003eYou might have heard of the 10,000-hour rule. It supposedly takes 10,000 hours of practice to become a master at something. This number will, of course, vary depending on what you are trying to master- some skills have a much higher bar than that, others may be easier. What is also important is the quality of that practice. In this blog post I will look at different ways you can carry out deliberate practice as a developer.\u003c/p\u003e","title":"Deliberate Practice for Software Developers"},{"content":"Happy New Year to all my readers! I hope you missed these blog posts at least a little bit.
I had a good rest during the festive period and feel ready to start writing and hacking again. In this article, I want to tell you about my most recent fascination- programming for Alexa (Echo) enabled devices.\nWhat is Alexa You probably know all about Alexa, but let’s make sure that we are all on the same page. Alexa is a virtual assistant developed by Amazon that can live in different devices. The most popular of those are probably the Echo series developed by Amazon. Here is my Echo Spot:\nOk, so we have a virtual assistant. We are used to that already- with Siri, Ok Google and Cortana- so what’s so cool about Alexa?…\nWhy is Alexa interesting for developers It is very easy to develop your own Alexa Skills (this is what the programs that run on Alexa are called).\nFor a developer like myself, predominantly interested in backend development… There is not much frontend to worry about here! You just focus on responding to input and… voila! You have yourself a user-facing application.\nAnother fascinating thing about Alexa development is that it is still in its early stages. That means that there are not that many skills available (yet) and it is relatively easy to stand out- think about the App Store in its infancy.\nSo it is easy to do, there is not much frontend to worry about and it is wide open for innovation and disruption… How do you get started then?\nBasics of developing Alexa skills To develop Alexa skills you effectively need three things:\nAmazon Developer Account (get it here https://developer.amazon.com)\nAWS Account (you could do without, but that’s much harder – get it here https://portal.aws.amazon.com/billing/signup)\nAlexa enabled device (you can do without, but where is the fun in that?)\nLet’s have a quick look at the Amazon Developer Account and how intuitive it looks:\nEverything is done with the help of friendly user interfaces.
On top of that, there are multiple example skills available for you to try out.\nOnce you configure your skill in the developer account, you should create an accompanying Lambda function (in AWS) that will service the skill. You don’t need to know much about AWS, as the tutorials explain all that you really need to know, but it can be very helpful. I have written an article about how to learn AWS if you are interested.\nDeveloping Alexa Skills – a great way of learning serverless computing So I have mentioned how knowing AWS can help you develop Alexa skills. It also works the other way round- by developing Alexa skills you will learn quite a bit about serverless computing.\nWhen creating an Alexa skill, if Amazon decides to promote it, you may be getting thousands of users simultaneously at a moment’s notice. This works very well with AWS Lambda and makes Alexa skills a perfect use case for this architecture.\nIf you need to store data between user sessions- an autoscaling DynamoDB (AWS’s serverless NoSQL offering) will be perfect for you.\nIn short- if you want a real-world use case for non-trivial serverless computing today- start working on Alexa skills!\nAlexa and me – my plan for 2019 I guess it is pretty clear how excited I am about Alexa and developing for the platform.\nIn 2018 I focused on writing blog posts- I wrote over 100 (between this and my company blog). In 2019 I want to focus on creating customer-facing projects. Alexa seems to be a perfect medium for that, where I can build both small proofs of concept/novelty programs as well as more fully-featured experiences.\nAs to my plans- I will scale down my writing to a blog post a week, but scale up my software production.\nThis year I will aim to release 2 Alexa skills every month and write about the serverless/cloud/microservices technology that makes this possible.
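To give a flavour of how little code a basic skill needs, here is a minimal sketch of a Lambda handler in Python. It uses the raw Alexa request/response JSON format rather than any SDK, and the greeting text is invented for illustration:

```python
# Minimal Alexa skill backend - a sketch, not a production skill.
# Alexa sends a JSON event to Lambda; we answer with a JSON envelope
# containing the text the device should speak.

def build_speech_response(text, should_end_session=True):
    """Wrap plain text in the response envelope Alexa expects."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": should_end_session,
        },
    }

def lambda_handler(event, context):
    """Entry point that AWS Lambda invokes with the Alexa request."""
    request_type = event.get("request", {}).get("type", "")
    if request_type == "LaunchRequest":
        # User said "Alexa, open <skill name>" - keep the session open.
        return build_speech_response("Happy New Year!", should_end_session=False)
    # Everything else (intents, session end) gets a simple goodbye.
    return build_speech_response("Goodbye!")
```

Wired up to a skill configured in the developer console, this handful of lines is already a complete, voice-driven application.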
I hope you will enjoy reading this and even learn something!\n","permalink":"https://e4developer.com/posts/alexa-say-a-happy-new-year/","summary":"\u003cp\u003eHappy New Year to all my readers! I hope you missed these blog posts at least a little bit. I had a good rest during the festive period and feel ready to start writing and \u003cem\u003ehacking\u003c/em\u003e again. In this article, I want to tell you about my most recent fascination- programming for Alexa (Echo) enabled devices.\u003c/p\u003e\n\u003ch2 id=\"what-is-alexa\"\u003eWhat is Alexa\u003c/h2\u003e\n\u003cp\u003eYou probably know all about Alexa, but let’s make sure that we are all on the same page. Alexa is a virtual assistant developed by Amazon that can live in different devices. The most popular of those are probably the Echo series developed by Amazon. Here is my Echo Spot:\u003c/p\u003e","title":"Alexa, say a Happy New Year!"},{"content":"I wanted to write 2 blog posts a week in 2018, which would result in at least 100 blog posts in a year… This is post number 100! I could not be happier! In this centenary blog post, I look back at the journey that took me here. I will also share with you some of my ideas for this blog in 2019.\nBeginning of the blog I started this blog in January with a short, a bit vague explanation of why. After publishing that first blog post, I quickly came up with the idea of writing at least twice a week- I had plenty to say!\nMy first blog posts were not very good. The first article that I was really happy with was Setting up RabbitMQ with Spring Cloud Stream. Highly technical, but also informative. From that moment I started to feel like I knew what I was doing…\nSince everyone loves a stat, here are the first three months of my blog’s traffic:\nThe most popular blog posts Some of my blog posts turned out to be very popular. You can see an automatically updated “most popular of last week” highlighted in this blog’s widgets.
The most popular of all time are as follows:\nAnd here are the clickable links:\nIntroduction to Concurrency in Spring Boot – something I like to talk about and always wanted to write. Turns out that plenty of people are interested as well!\nSpring Boot – Best Practices – one of my personal favorites! I am very glad that the authors of the Spring framework contributed as well!\nHow to write horrible Java – kind of a joke blog post. Mostly popular because of social sharing. Read it for a laugh!\nThe rise of Java Microframeworks – if you have never heard of them, check it out!\nGetting Started with Kafka in Spring Boot – I am a bit surprised that this one is so popular! I guess Kafka is difficult and popular!\nShould I Learn Java in 2018 – spoiler- yes you should!\nHighlights of my favourite blog posts Some of the blog posts that I wrote did not become so popular, but I still really like them. Here is a short list of a few other articles that I am really happy with:\nJava surprises – Unexpected behaviours and features – I love that article. If you think you know everything about Java, check this out! Quite a few bits of trivia and fun features!\nHATEOAS – a simple explanation – the first article that I wrote that got any sort of notice! What a great feeling, and not a bad article!\nSpring’s WebFlux / Reactor Parallelism and Backpressure – a difficult topic explained well, I believe.\nThe Quest for Simplicity in Java Microservices – simplicity is a virtue\nThere are many more articles that I really enjoyed writing, but these are especially memorable. I wonder what kind of titles I will be listing in December 2019!\nHow much did I actually write? 100 blog posts, counting this one. I also wrote some blog posts for Scott Logic, but we are talking about e4developer here!\nHow much is 100 blog posts? This is about 92,000 words, counting this blog post! This is apparently the average for a novel!
Here are stats for some books:\nThe Master and Margarita (one of my favourite books) – 117,120 words\nThe Color of Magic (Discworld, go read it!) – 60,900 words\nThe Godfather – 136,640 words\nHow long does it take to read 92,000 words? Apparently something like 6.5 hours. This is good news- you can read it all in a work day with a long lunch (if for some crazy reason, you would want to do so!).\nHow long did it take me to write? Unfortunately I did not collect detailed statistics, but quite a lot! Of course some blog posts required quite a lot of programming and these took much longer than, say, book reviews… On the other hand, to write a book review, I would read the book first so… Hard to say.\nAll that I can say is that I would spend 2-4 hours, 2-4 nights a week. Taking a very conservative estimate here, I think that I must have worked at least 300 hours this year on my blog… I am glad that so many of you like the content!\nWhat about the newsletter and the videos? As you might recall (if you are a frequent reader), at some point I was trying to maintain a newsletter and produce some YouTube videos. Let’s look back at these efforts.\nNewsletter – It sounds fun to be able to send monthly emails with links and short descriptions of the blog posts. In reality it took a bit too much time in my already super busy life. I have about 300 subscribers and preparing a monthly update takes about 30 minutes. With more than 1000 daily visitors, it is just a bit much in terms of effort versus outreach. I may come back to this in the future.\nVideos – I really enjoyed making the couple of videos that I did! (my YouTube channel) The reality is that it takes a lot of time for someone like me (a video beginner). With two blog posts a week I did not want to slip on these deadlines in favour of videos. It seems I bit off more than I could chew.
I definitely want to come back to making videos next year!\nMonetizing the blog Most bloggers at some point would like to make some pocket money from their blogs. I am no different. I have tried three main approaches and this is how it went:\nGoogle AdSense – This is quite simple to start- display some ads on your website! Sounds like easy money? Wrong! Developers simply do not click on ads… Well, what a surprise! Anyway, I prefer to have a bit more control over what I am endorsing on my website, so at the moment there are no AdSense ads on this blog.\nAmazon Affiliate – Here, I can promote things that I actually bought myself and feel good about recommending to others. You can find links to books under book reviews. At the moment this generates a minimal amount of money (you can imagine the scale of commission on a few books a month…)\nPluralSight Affiliate – I am a huge fan of PluralSight (I wrote an article about learning with PluralSight) and a long-term subscriber, so it seemed like a perfect thing to promote on my blog. I think I must have hit on something here, as this is the only method that actually sort of works. £20-40 a month is not a fortune, but it’s a nice tip!\nCurrent readership This was my last month in terms of readership:\nI am very happy with my views. It feels like I am actually writing for an audience!\nThis is how my year looked in terms of gaining visibility in Google:\nThe future direction I am extremely pleased with how this year played out for my blog. I did not expect to be getting about a thousand visitors a day towards the end of the year! With that in mind, what are my plans for the next year? Here is a short list:\nWrite at least once a week. This will reduce the number of posts, but the focus has to be quality!\nWrite more about AWS and cloud infrastructure. I will write a whole blog post explaining why this is so natural and important when thinking about microservices… Oh, wait!
I already did, for Scott Logic – DevOps as a key to success with the microservices approach\nMake some more videos for my YouTube channel\nFocus on interesting projects. I would like to build something remarkable.\nThere you go! Four simple goals. If I manage to get all of them done I will be very happy!\nThank you so very much! A big thank you to everyone who reads this blog! Thank you for all the nice feedback, comments, messages on Twitter and Reddit! You have no idea how nice it is to know that people enjoy my work!\nWith that, I would like to close out 2018, spend the remaining two weeks recharging my batteries and get back to blogging in 2019!\nMerry Christmas and a Happy New Year!\nBartosz\n","permalink":"https://e4developer.com/posts/i-wrote-100-blog-posts-in-2018-how-it-went-and-whats-next/","summary":"\u003cp\u003eI wanted to write 2 blog posts a week in 2018, which would result in at least 100 blog posts in a year… This is post number 100! I could not be happier! In this centenary blog post, I look back at the journey that took me here. I will also share with you some of my ideas for this blog in 2019.\u003c/p\u003e\n\u003ch2 id=\"beginning-of-the-blog\"\u003eBeginning of the blog\u003c/h2\u003e\n\u003cp\u003eI started this blog in January with a short, a bit vague \u003ca href=\"https://e4developer.com/posts/starting-a-blog-why/\"\u003eexplanation of why\u003c/a\u003e. After publishing that first blog post, I quickly came up with the idea of writing at least twice a week- I had plenty to say!\u003c/p\u003e","title":"I wrote 100 blog posts in 2018 - how it went and what’s next?"},{"content":"The topic of software architecture comes up often when discussing microservices. Many newcomers to microservices are not sure how to handle discussing architecture and how to make decisions. Should they bring in the more traditional role of the software architect, or should everyone just do what they think makes sense?
In this article, I will give you my answers to these questions and share some additional advice.\nThe high-level view of the entire system First of all, regardless of what you decide is a good approach to handling architectural decisions, you need to know what your system looks like.\nI have worked on projects where the real architecture was “tribal knowledge” passed from one group of developers to another, and on systems where the up-to-date high-level logical architecture diagram was always on the wall. Guess which projects ended up with more efficient, sane architectures?\nIn order to really start making precise architectural decisions and refactoring your system, you need to know what you are working with. Going in “blind”, it is far too easy to make mistakes and overlook side-effects and dependencies.\nMy first piece of advice: whatever you decide, first put some effort into creating a high-level logical architecture diagram. Ideally, you would also make some diagrams for data-flow, security and other important aspects of your system.\nWorking on diagrams may seem like a chore, so make sure you only work on those that are important and genuinely useful to the team.\nThe architecture of choices What makes the architecture of microservices systems more difficult to talk about? I believe it comes down to these two things:\nRapid change\nMany choices at every step\nI have even written an article describing microservices as the architecture of choices.\nWith the number of choices at every step, it is clear to me that you can’t just trust your luck and not think about these things. Also, with the rapid pace of development, it is equally clear that you should stay away from TOGAF and similar ideas.
(In my humble opinion, you should stay away from TOGAF anyway, but that is for another article.)\nWith these parameters, how do you approach working on the architecture of microservices systems?\nArchitectural Decision Records as a system design tool An Architectural Decision (AD), per the homepage of the ADR GitHub organization, is:\na software design choice that addresses a functional or non-functional requirement that is architecturally significant.\nADR Homepage\nADRs are not entirely new – Michael Nygard described them in a blog post as early as 2011, but I only came into contact with them in 2018. ThoughtWorks listed them as ADOPT level technology in their technology adoption radar in 2018.\nWhat is the whole idea about? There are different approaches, but it roughly boils down to:\nFor every Architectural Decision (AD) that you take, follow this process.\nFor every AD, use a template to record: Context, Motivation, Decision, Status, Consequences, a combination of the above (or something else as required).\nThese records become your ADRs (Architectural Decision Records).\nStore them somewhere like a Git repository, Google Docs or a wiki (whatever works for your team).\nThe core idea here is to keep it simple, keep it standardized, keep it accessible.\nI believe that for many teams, this is exactly what is needed. You want to know what was decided, when and why. This does not have to take much effort.\nTaking this idea a bit further with Git- you could have architectural decisions as pull requests that get discussed by the team. What a great way to get more people involved and heard!\nStability of a distributed system Because of the difficulty of building distributed systems like microservices, I recommend an additional, more subtle technique for steering the overall architecture and design: using Consumer Driven Contracts.\nConsumer driven contracts (CDCs) could easily fill a few articles on their own.
I am mentioning them here as a way of letting other teams know what is important for your system.\nIf you are not familiar with the concept, check out https://docs.pact.io/ where you can read an introduction to Pact (a popular CDC tool).\nIf you are working in a truly distributed fashion (multiple teams, multiple services), you need to find a way of letting other teams know what is architecturally significant to you and your system. One way is using common ADRs, the other is using CDCs.\nDo you need traditional software architects? I think it is clear that you need to think about software architecture when working with microservices. You should also engage in taking architectural decisions (with the help of ADRs) and evolving your APIs (safely, with CDCs).\nDoes that mean that you need an architect? It depends! It clearly means that if you had one, you could probably keep them quite busy. What about:\nUpdating the high-level architecture diagrams\nHelping create and progress ADRs\nWorking on API design across teams\nWorking alongside teams on more challenging problems (security etc.)\nI think just these four can be a full-time job on a large enough project. You don’t necessarily need someone with the title, but you definitely need people with the architecture skill.\nSummary I hope that this article gave you an idea of how to make architecture work in your microservices system. Working on microservices is more difficult than just building monoliths. That means that you not only need a good team of developers, you also need people with sharp architectural intuitions and modern, lightweight processes that work!\n","permalink":"https://e4developer.com/posts/software-architecture-in-the-world-of-microservices/","summary":"\u003cp\u003eThe topic of software architecture comes up often when discussing microservices. Many newcomers to microservices are not sure how to handle discussing architecture and how to make decisions.
Should they bring in the more traditional role of the software architect, or should everyone just do what they think makes sense? In this article, I will give you my answers to these questions and share some additional advice.\u003c/p\u003e\n\u003ch2 id=\"the-high-level-view-of-the-entire-system\"\u003eThe high-level view of the entire system\u003c/h2\u003e\n\u003cp\u003eFirst of all, regardless of what you decide is a good approach to handling architectural decisions, you need to know what your system looks like.\u003c/p\u003e","title":"Software architecture in the world of microservices"},{"content":"Today I want to talk to you about technical debt. This is a topic that comes up a lot and often generates some emotions. Developers often want to have a minimal amount of technical debt. Some will go to great lengths to eliminate any technical debt they see. Let me tell you what I think about it all.\nDifferent types of technical debt First of all, it is a bit simplistic to classify all technical debt as the same thing. I have read a few attempts at classifying it- Types of Technical Debt by Agile Mike and There are 3 main types of technical debt. Here’s how to manage them. by FirstMark make for interesting reading.\nThe aforementioned articles focus on how the debt came to be: “was it deliberate?”, “was it because of a lack of knowledge?” or perhaps “did something change?”. There is more to it, and if you are interested, you can read the mentioned articles. Here I want to focus on a slightly different classification. We are going to ask – “What is this technical debt doing to my system?”\nPotential bugs – Some tech debt is a source of potential bugs. Haphazardly handled multithreading issues, data transformation algorithms that make generous assumptions about the data quality etc.
This kind of debt often manifests itself when bugs are detected, graduating to the level of a system failure that needs to be fixed.\nPotential security issues – How many times did you take a shortcut when dealing with security concerns? I hope the answer is not many! I often see these potential security issues referred to as tech debt. Whichever side of this argument you land on, I am sure you will agree that it is different from your standard technical debt.\nDevelopment impediments – This is what most commonly is classified as technical debt. Things that make the development of the system more difficult than it could be. Here we can talk about planned technical debt, accidental technical debt etc.\nOperational impediments – In modern large-scale system development (microservices and the like), we often have blurred lines between what is operations/infrastructure code and what is the traditional system source code. Some decisions (for example, how configuration is handled etc.) can be technical debt hitting both the operation and the development of the system.\nCode aesthetics problems – These are other classical tech debt issues, but ones that do not directly make the development of the system much harder. Slightly wrong names, patterns not used properly- all in code that has not been changed in a long time and works correctly. The sort of tech debt that you need to go looking for to find.\nI don’t want to tell you which technical debt is the most important to you. It depends on your system and your goals. It could be the security aspect or the operational aspect. What is probably always true is that some types of technical debt will cost you a lot, while others will be of lower priority.\nWith this classification and different pricing in mind, let’s look at the metaphor once again.\nThe technical debt metaphor I really like the technical debt metaphor.
It is linked to financial debt, on which you need to pay interest as it accumulates… What I really like is that we can take this metaphor a bit further.\nDifferent technical debt has different interest rates. Like in the financial world, not all debt is created equal. You want to pay off the highest-interest debt first. This underlines the importance of knowing which debt matters most for your system.\nIt is healthy to take on some debt. As you may know, most companies in the world carry certain levels of debt, and this is considered healthy. You can use debt to fuel your growth (just don’t overdo it). In a software system, if you have absolutely zero technical debt, you can argue that you have spent too much making future development easier (which at some point will stop paying off).\nLike in the world of finance and companies, managing technical debt is a challenging and non-trivial task.\nFinding time to fix technical debt What do you do when your product owner (or your manager) gives you no time to fix any of your technical debt? Depending on your level of control over what you are working on, the blame could be on your side!\nIf you are fixing technical debt, you should reasonably expect rewards of these kinds:\nMore time saved by fixing the technical debt than it took to fix it (otherwise why do it?)\nPreventing bugs that would take more time to fix (see the point above)\nPreventing security incidents that could have serious ramifications (highly system dependent, you need to judge here)\nSaving time on operations and maintaining the system (greater than the investment in fixing the debt)\nFeeling good about yourself and the codebase (this is also important!)\nSo, how do you know what is worth doing? It comes down to experience and some educated guesswork- who said it was easy?\nIf you can deliver more by doing a bit of tech debt fixing, you can arm yourself with some estimates and good arguments and win back that time from your product owner/manager.
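To make the kind of estimate I mean concrete, here is a toy sketch- the debt items and every number in it are entirely invented, but ranking debts by the "interest" they cost per hour of fixing is exactly the sort of back-of-the-envelope maths that is easy to show a product owner:

```python
# Toy tech-debt ranking: pay off the highest-interest debt first.
# "interest" here = developer hours lost per month while the debt remains.

debts = [
    {"name": "flaky multithreading in importer", "fix_hours": 16, "interest_hours_per_month": 12},
    {"name": "manual config per environment",    "fix_hours": 8,  "interest_hours_per_month": 10},
    {"name": "naming issues in stable module",   "fix_hours": 4,  "interest_hours_per_month": 0.5},
]

def payoff_ratio(debt):
    # Higher ratio = more interest saved per hour invested in the fix.
    return debt["interest_hours_per_month"] / debt["fix_hours"]

for debt in sorted(debts, key=payoff_ratio, reverse=True):
    print(f"{debt['name']}: saves {payoff_ratio(debt):.2f} hours/month per fix-hour")
```

With these made-up numbers, the config debt comes out on top and the purely aesthetic debt last- which is the point: the numbers do the arguing for you.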
If they don’t need to know that level of detail- do whatever delivers them a working and maintainable system in the fastest way- and that includes fixing some tech debt!\nThe sensible approach What is the sensible approach then? It depends, but I can give you some guidance:\nMake sure to classify your tech debt into high-interest and low-interest categories\nPrioritize tech debt that is quick to fix and gives a good pay-off\nWork on tech debt in the most active areas of the code – the pay-off will be greater\nDon’t aim for 100%, you won’t be doing the system a favour\nIs it ever a good idea to completely stop working on new features to work on some tech debt? If the maths says that you will deliver faster and at higher quality overall, it is the right call (given that you can pull it off with your management). LinkedIn did something like that with their project InVersion, described in When Your Tech Debt Comes Due by their CTO Kevin Scott- definitely give it a read!\nSummary I hope this article will make you reflect a bit more on the subtleties of managing tech debt in your project. If you would like to discuss it, you can catch me on Twitter. If you would like to read more about tech debt, I also wrote an article about Common Technical Debt in Microservices.\n","permalink":"https://e4developer.com/posts/having-just-the-right-amount-of-technical-debt/","summary":"\u003cp\u003eToday I want to talk to you about technical debt. This is a topic that comes up a lot and often generates some emotions. Developers often want to have a minimal amount of technical debt. Some will go to great lengths to eliminate any technical debt they see. Let me tell you what I think about it all.\u003c/p\u003e\n\u003ch2 id=\"different-type-of-technical-debt\"\u003eDifferent types of technical debt\u003c/h2\u003e\n\u003cp\u003eFirst of all, it is a bit simplistic to classify all technical debt as the same thing.
I have read a few attempts at classifying it- \u003ca href=\"https://agilemichaeldougherty.wordpress.com/2015/07/24/types-of-technical-debt/\"\u003eTypes of Technical Debt\u003c/a\u003e by Agile Mike and \u003ca href=\"https://hackernoon.com/there-are-3-main-types-of-technical-debt-heres-how-to-manage-them-4a3328a4c50c\"\u003eThere are 3 main types of technical debt. Here’s how to manage them.\u003c/a\u003e by FirstMark make for interesting reading.\u003c/p\u003e","title":"Having just the right amount of technical debt"},{"content":"Machine Learning and its applications are getting more popular every day. For many developers, building a machine learning powered application seems like a daunting task- all that learning, data collection, and computing power needed! In reality, it is not as difficult as it sounds- as long as you harness “the power of the Cloud”…\nIn this article, I want to tell you about the different machine learning services provided by AWS and give you some ideas about how you could use them! Ready to get inspired? Let’s go!\nAmazon Comprehend Amazon Comprehend is a natural language processing and text analytics service. It allows you to send text and receive information about:\nWhat is the sentiment – positive, negative or neutral?\nWhat (or who) is this text talking about?\nWhat language is it written in?\nAnalysis of the syntax\nAnd a few other tricks\nI wrote my master’s thesis about sentiment analysis in text, and this tool basically gives you best-in-class capability for a really cheap price! It is unbelievable how much this field has advanced in just 7 years.\nI mentioned the price- at the time of writing, you get 50,000 queries for free (a month) and after that, it is $0.0001 per unit of text!\nThere are many fascinating uses of this service and I am planning on using it with Reddit (hence my Reddit API authorization article) and after that, to look at this blog’s sentiment.
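Calling Comprehend really is just a few lines with boto3. Below is a hedged sketch- the helper name is my own invention, and the client is injectable so the function can be exercised without AWS credentials:

```python
def detect_sentiment(text, comprehend_client=None):
    """Return (sentiment_label, score_breakdown) for a piece of text.

    Uses Amazon Comprehend's DetectSentiment API. A client can be
    injected for testing; otherwise boto3 creates a real one.
    """
    if comprehend_client is None:
        import boto3  # imported lazily so a stub client needs no AWS SDK
        comprehend_client = boto3.client("comprehend")
    response = comprehend_client.detect_sentiment(Text=text, LanguageCode="en")
    # 'Sentiment' is POSITIVE, NEGATIVE, NEUTRAL or MIXED;
    # 'SentimentScore' holds a confidence value for each label.
    return response["Sentiment"], response["SentimentScore"]
```

With real credentials configured, a call like detect_sentiment("I love this blog!") would be expected to come back POSITIVE- and at the per-unit pricing above, analysing a whole subreddit's worth of comments costs very little.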
Who knows, maybe we will discover something interesting!\nAmazon Rekognition If text analysis is not impressive enough for you, meet Amazon Rekognition. With Amazon Rekognition, you can process videos and images looking for:\nFacial recognition Facial analysis (different elements of the face, smiling or not, etc.) Different objects and activities recognition Text in images and more This is again available to you for a reasonable price and some free level of usage.\nThis is something that I think will be used more and more by developers around the world. Until recently, complex image analysis was out of reach for most independent developers and hobbyists.\nSome ideas for using this service are:\nSearchable video library Searchable photo library Sentiment analysis (maybe even mixing it with Comprehend?) Image moderation Other machine learning usages Definitely check out the Rekognition AWS page for more ideas.\nAmazon Transcribe Amazon Transcribe is all about recognizing speech in recorded audio files. Perhaps it sounds simpler than working with text sentiment and video/image analysis, but it may be even more practical!\nYou have the following features when working with audio files:\nUsing custom vocabulary to improve the accuracy Speaker identification Generating timestamps for each word Works even with lower quality audio There is an ocean of data out there that is not being explored, since the data is “spoken word”. Imagine if you had all that speech in text form. Amazon Transcribe gives you the power to do just that.\nOther interesting services For me, Comprehend, Rekognition and Transcribe are the most interesting “Machine Learning powered” services that AWS has to offer. This does not mean that there isn’t anything else available! 
Here is a short list of the remaining services:\nAmazon SageMaker – “Build, train, and deploy machine learning models at scale” Amazon Lex – “Conversational interfaces for your applications powered by the same deep learning technologies as Alexa” Machine Learning – “Machine learning in the hands of every developer and data scientist” Amazon Polly – “Turn text into lifelike speech using deep learning” Amazon Translate – “Natural and accurate language translation” Summary The power is in your hands. Developers can no longer claim that “true machine learning” is out of their reach as individuals. I am very excited about the power that AWS gives developers (as well as what Azure and Google offer in their respective clouds).\nYou can use these services without the need to go fully onto the cloud. However, if you want to get some advice on How to learn AWS, you can read my other article.\n","permalink":"https://e4developer.com/posts/making-your-machine-learning-idea-real-with-aws/","summary":"\u003cp\u003eMachine Learning and its applications are getting more popular every day. For many developers, building a machine learning powered application seems like a daunting task: all that learning, data collection, and computing power needed! In reality, it is not as difficult as it sounds, as long as you harness \u003cem\u003e“the power of the Cloud”…\u003c/em\u003e\u003c/p\u003e\n\u003cp\u003eIn this article, I want to tell you about the different machine learning services provided by AWS and give you some ideas about how you could use them! Ready to get inspired? Let’s go!\u003c/p\u003e","title":"Making your machine learning idea real with AWS"},{"content":"I am a big fan of Reddit. The platform is great for learning and sharing programming knowledge… In fact, it contains so much knowledge and opinion, that there is no chance for any single person to analyze it all. Sounds like a job for a machine? 
Before we get started, we need to learn how to authenticate with the Reddit API.\nPublic read-only API with JSON Reddit has a very friendly API, with multiple endpoints being simply accessible in a JSON format after adding .json to the request. For example, to get a list of Java topics discussed on the /r/Java subreddit, as a human you would go to https://www.reddit.com/r/java and you would see something like this:\nAnd by simply adding .json, we can transform the URL into https://www.reddit.com/r/java/.json and see the following:\nThis is great! If the only thing you want to do with your script/program is to read some article headers and comments, Reddit makes it incredibly easy.\nBut what if you want to access parts of the API that are not publicly viewable, or if you want to actually log in “as a user” and interact with Reddit programmatically?\nOAuth with Reddit, Java, and Spring Before showing you my code, I want to point you to a few official resources that you are likely to find very helpful when working with the Reddit API:\nhttps://github.com/reddit-archive/reddit/wiki/OAuth2 – OAuth2 explanation of the different flows and setting your application up. This was the main resource I used for figuring it out.\nhttps://www.reddit.com/dev/api – Reddit API documentation\nStep 1 – Creating a Reddit Application This is already explained in https://github.com/reddit-archive/reddit/wiki/OAuth2, but to give you an express version:\nGo to https://www.reddit.com/prefs/apps Click on the: “are you a developer? create an app…” Fill in the form: You should end up with something like this: Don’t worry, I have deleted this app already, so the highlighted confidential parts will no longer work. These are your id and secret though and we will use them later. 
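Before wiring these into Spring, it helps to see what the id and secret actually turn into on the wire: a Basic Authorization header, which is essentially Base64 over id:secret (this is what Spring's setBasicAuth does for you). Here is a minimal JDK-only sketch; BasicAuthSketch is a made-up name and the credentials are the throwaway ones from above:

```java
import java.util.Base64;

public class BasicAuthSketch {

    // Roughly what HttpHeaders.setBasicAuth produces:
    // "Basic " + base64(id + ":" + secret)
    static String basicAuthHeader(String id, String secret) {
        String token = Base64.getEncoder()
                .encodeToString((id + ":" + secret).getBytes());
        return "Basic " + token;
    }

    public static void main(String[] args) {
        // Throwaway credentials from the screenshot above
        System.out.println(basicAuthHeader("Ljxr62Lv73T32g", "d-avWGrN_BTGCW_rCU5rU_gJBf4"));
    }
}
```

Nothing secret about the encoding itself, of course – which is exactly why OAuth calls must go over HTTPS.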
Step 2 – getting an Access Token There are two main ways I want to authenticate with Reddit OAuth:\nBased on the client_credentials – for making mostly read-only calls that do not require my username and password Based on my username and password – for using the API with write-as-a-user capabilities The main difference is how you get the Access Token in each case. Let’s start with the client_credentials only. To get that token, you will need to provide:\nYour app-id (“Ljxr62Lv73T32g” in this example) and secret (“d-avWGrN_BTGCW_rCU5rU_gJBf4” in this example). A User-Agent header to identify your application. This is extremely important, as without this the application simply won’t work! Here is example code for getting an Access Token with the client_credentials grant. I am using Spring Boot 2.0 and Jackson dependencies for JSON:\nprivate String getAuthToken(){ RestTemplate restTemplate = new RestTemplate(); HttpHeaders headers = new HttpHeaders(); headers.setBasicAuth(\u0026#34;Ljxr62Lv73T32g\u0026#34;, \u0026#34;d-avWGrN_BTGCW_rCU5rU_gJBf4\u0026#34;); headers.setContentType(MediaType.APPLICATION_FORM_URLENCODED); headers.put(\u0026#34;User-Agent\u0026#34;, Collections.singletonList(\u0026#34;tomcat:com.e4developer.e4reddit-test:v1.0 (by /u/bartoszjd)\u0026#34;)); String body = \u0026#34;grant_type=client_credentials\u0026#34;; HttpEntity\u0026lt;String\u0026gt; request = new HttpEntity\u0026lt;\u0026gt;(body, headers); String authUrl = \u0026#34;https://www.reddit.com/api/v1/access_token\u0026#34;; ResponseEntity\u0026lt;String\u0026gt; response = restTemplate.postForEntity( authUrl, request, String.class); ObjectMapper mapper = new ObjectMapper(); Map\u0026lt;String, Object\u0026gt; map = new HashMap\u0026lt;\u0026gt;(); try { map.putAll(mapper .readValue(response.getBody(), new TypeReference\u0026lt;Map\u0026lt;String,Object\u0026gt;\u0026gt;(){})); } catch (IOException e) { e.printStackTrace(); } System.out.println(response.getBody()); return 
String.valueOf(map.get(\u0026#34;access_token\u0026#34;)); } The second way of authenticating is with the password grant. It works the same way, you just need to also submit your username and password, and change the grant type. You also have to create your application as a Script:\nWith all that prepared, here is example code for getting an Access Token with the password grant. I am using Spring Boot 2.0 and Jackson dependencies for JSON:\nprivate String getAuthToken(){ RestTemplate restTemplate = new RestTemplate(); HttpHeaders headers = new HttpHeaders(); //Different login details as I had to re-create the app headers.setBasicAuth(\u0026#34;RvXWoa0lPAYaQw\u0026#34;, \u0026#34;s0DWeNK6-61UMOJ-KG3QQ0N-GWQ\u0026#34;); headers.setContentType(MediaType.APPLICATION_FORM_URLENCODED); headers.put(\u0026#34;User-Agent\u0026#34;, Collections.singletonList(\u0026#34;tomcat:com.e4developer.e4reddit-test:v1.0 (by /u/bartoszjd)\u0026#34;)); String body = \u0026#34;grant_type=password\u0026amp;username=bartoszjd\u0026amp;password=thisissecret\u0026#34;; HttpEntity\u0026lt;String\u0026gt; request = new HttpEntity\u0026lt;\u0026gt;(body, headers); String authUrl = \u0026#34;https://www.reddit.com/api/v1/access_token\u0026#34;; ResponseEntity\u0026lt;String\u0026gt; response = restTemplate.postForEntity( authUrl, request, String.class); ObjectMapper mapper = new ObjectMapper(); Map\u0026lt;String, Object\u0026gt; map = new HashMap\u0026lt;\u0026gt;(); try { map.putAll(mapper .readValue(response.getBody(), new TypeReference\u0026lt;Map\u0026lt;String,Object\u0026gt;\u0026gt;(){})); } catch (IOException e) { e.printStackTrace(); } System.out.println(response.getBody()); return String.valueOf(map.get(\u0026#34;access_token\u0026#34;)); } Step 3 – using the API Once you have the Access Token, using the API is very simple. 
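One caveat worth flagging about the token requests above: the body is plain application/x-www-form-urlencoded text, so if your real password contains characters like &amp; or =, each value should be percent-encoded rather than concatenated raw. Here is a small JDK-only sketch of that encoding (FormBody and formBody are hypothetical names, not part of the code above):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.util.LinkedHashMap;
import java.util.Map;

public class FormBody {

    // Percent-encodes each key and value before joining them with & and =,
    // producing a safe application/x-www-form-urlencoded request body
    static String formBody(Map<String, String> params) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (sb.length() > 0) sb.append('&');
            try {
                sb.append(URLEncoder.encode(e.getKey(), "UTF-8"))
                  .append('=')
                  .append(URLEncoder.encode(e.getValue(), "UTF-8"));
            } catch (UnsupportedEncodingException ex) {
                throw new IllegalStateException(ex); // UTF-8 is always available
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("grant_type", "password");
        params.put("username", "bartoszjd");
        params.put("password", "this&that"); // made-up password with a tricky character
        System.out.println(formBody(params));
        // grant_type=password&username=bartoszjd&password=this%26that
    }
}
```

The hardcoded body strings in the snippets above work because the example values happen to contain no special characters.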
Before doing that, please make sure that you familiarise yourself with the Reddit API rules.\nMaking the call to the API requires you to set up the User-Agent and use Bearer token authentication in Spring. Here is example code that will retrieve the hot topics in a specified subreddit:\npublic String readArticles(String subReddit) { RestTemplate restTemplate = new RestTemplate(); HttpHeaders headers = new HttpHeaders(); String authToken = getAuthToken(); headers.setBearerAuth(authToken); headers.put(\u0026#34;User-Agent\u0026#34;, Collections.singletonList(\u0026#34;tomcat:com.e4developer.e4reddit-test:v1.0 (by /u/bartoszjd)\u0026#34;)); HttpEntity\u0026lt;String\u0026gt; entity = new HttpEntity\u0026lt;String\u0026gt;(\u0026#34;parameters\u0026#34;, headers); String url = \u0026#34;https://oauth.reddit.com/r/\u0026#34;+subReddit+\u0026#34;/hot\u0026#34;; ResponseEntity\u0026lt;String\u0026gt; response = restTemplate.exchange(url, HttpMethod.GET, entity, String.class); return response.getBody(); } And here is the outcome in the browser:\nWhat is next? This is the first step in my exploration of the Reddit API. I have recently been learning a lot about AWS and I discovered a service called Amazon Comprehend. It is a fascinating Sentiment Analysis API that I am planning to use with Reddit! Stay tuned for more!\n","permalink":"https://e4developer.com/posts/reddit-api-authentication-with-java-spring/","summary":"\u003cp\u003eI am a \u003ca href=\"https://e4developer.com/posts/reddit-the-java-goldmine/\"\u003ebig fan of Reddit\u003c/a\u003e. The platform is great for learning and sharing programming knowledge… In fact, it contains so much knowledge and opinion, that there is no chance for any single person to analyze it all. Sounds like a job for a machine? 
Before we get started, we need to learn how to authenticate with the Reddit API.\u003c/p\u003e\n\u003ch2 id=\"public-read-only-api-with-json\"\u003ePublic read-only API with JSON\u003c/h2\u003e\n\u003cp\u003eReddit has a very friendly API, with multiple endpoints being simply accessible in a JSON format after adding \u003cem\u003e.json\u003c/em\u003e to the request. For example to get a list of Java topics discussed on the /r/Java subreddit, as a human you would go to \u003ca href=\"https://www.reddit.com/r/java\"\u003ehttps://www.reddit.com/r/java\u003c/a\u003e and you would see something like that:\u003c/p\u003e","title":"Reddit API Authentication with Java/Spring"},{"content":"Java is a very mature programming language – in fact, it is over 21 years old, so if it was a person it could drink even in the USA! With age comes wisdom, but also with age comes quirkiness… at least sometimes. In this article, I will look at some of the more surprising and unexpected behavior and features of the language.\nHere we go, in no particular order, a collection of Java surprises to amuse you and impress your friends!\nJava has goto and const keywords While Java does not have goto it does reserve it as a keyword. The same is true for const. All it means is that you can’t name your variables using these names:\nint goto = 0; int const = 0; is both illegal and won’t compile!\nFormatting numbers with _ Java lets you use the _ character for padding out your numbers. Hence, you can write numeric values like this:\nint thousand = 1_000; double bigValue = 1_000_000.456_555; long thisIsSilly = 3______4__3; Double.MIN_VALUE is not what many assume So, Double.MAX_VALUE works pretty much as expected, giving you the value of: 1.7976931348623157E308. What do you think Double.MIN_VALUE gives you then? 4.9E-324! Ok, for a start- this value is greater than 0!\nDouble.MIN_VALUE returns the smallest Double value that is greater than 0. 
If you want the smallest Double value, you need to go with: -Double.MAX_VALUE. They really could have named these things a bit better. I wonder how many bugs this has caused!\nFun with Integer equality Speaking of bugs… Let me show you something really disturbing:\nInteger ten = Integer.parseInt(\u0026#34;10\u0026#34;); System.out.println(ten == Integer.valueOf(10)); //this is true Integer thousand = Integer.parseInt(\u0026#34;1000\u0026#34;); System.out.println(thousand == Integer.valueOf(1000)); //this is false Turns out that Integer objects are cached for values from -128 to 127. This means that when operating in this range, the == comparison will mostly work correctly. When going above it though, all bets are off!\nImagine: you could even write unit tests and all would be good, as long as you are not using big enough numbers. This can cause serious bugs, so just to be safe, a reminder: when working with objects, always use .equals() rather than relying on == equality, unless you know for sure this is the right thing to do.\nReflection lets you do (almost) anything This should not come as a surprise, but with reflection, you can override final values (most of the time) and access private fields… But not always.\nWhen writing my How to write horrible Java I found a case where overwriting final values does not work as expected. Constants in Java, when final, will get inlined, and even though your code will seem to have worked, no value will change. 
Magic (check my article for details and this Stack Overflow answer).\nHere is the code for overwriting finals, if you insist:\npublic static void notSoFinal() throws NoSuchFieldException, IllegalAccessException, InterruptedException { ExampleClass example = new ExampleClass(10); System.out.println(\u0026#34;Final value was: \u0026#34;+ example.finalValue); Field f = example.getClass().getDeclaredField(\u0026#34;finalValue\u0026#34;); Field modifiersField = Field.class.getDeclaredField(\u0026#34;modifiers\u0026#34;); modifiersField.setAccessible(true); modifiersField.setInt(f, f.getModifiers() \u0026amp; ~Modifier.FINAL); f.setInt(example, 77); System.out.println(\u0026#34;Final value was: \u0026#34;+ example.finalValue); } Did you know we have labels in Java? Ok, we depart the naughty land and we are back in the good old correct Java. Did you know that we have labels for our loops? Have a look:\nouterLoop: while (true) { System.out.println(\u0026#34;I\u0026#39;m the outer loop\u0026#34;); while(true){ System.out.println(\u0026#34;I am the inner loop\u0026#34;); break outerLoop; } } Using labels lets you continue or break a specific loop when dealing with nested loops… Kind of like goto would in a different language.\nThis lets you write very suspicious-looking code that compiles fine:\nint i = 3; http://www.e4developer.com while(i \u0026gt; 0){ System.out.println(\u0026#34;http://www.e4developer.com\u0026#34;); i--; } It compiles and works fine since it is simply a loop labeled http: with a comment attached to it. Makes for an interesting puzzle for those not familiar with labels!\nEnums are classes Ok, you probably know about that, but it bears repeating. Enums are special classes that have a limited number of instances. 
That means that enums can:\nImplement interfaces Have constructors Implement different methods I wrote an article for Scott Logic blog called Java Enums – how to use them smarter where I show some other neat usage ideas.\nFor loops are quite flexible The standard for loop, I am sure that you used them more times than you can remember:\nfor(int i = 0; i \u0026lt; 100; i++){ //... } Did you know that all parts are optional? You don’t need to initialize a variable, you don’t need a conditional stop and you don’t need to increment anything… If you omit everything you end up with an interesting syntax for an infinite loop:\nfor(;;){ //Infinite loop! } Java has initializers… Mentioning just in case… Ok, this is a fairly popular feature, yet I still meet experienced Java developers who are not really aware that it exists. In Java, you can write blocks of code that run either on the class load (static initializers) or just before the constructor (standard initializers). It goes like this.\nNormal initializer:\nint sum = 0; { for(int i = 0; i \u0026lt; 1; i++){ sum += 1; } } Static initializer:\nstatic double value = 0; static { for(int i = 0; i \u0026lt; 1; i++){ value += 1; } } Just remember to put these blocks inside the class, but not inside any methods or a constructor.\nDouble braces initialization of collections While on the topic of initializing things, I will show you a surprising way to initialize collections in Java:\nMap\u0026lt;String, String\u0026gt; map = new HashMap\u0026lt;String, String\u0026gt;() {{ put(\u0026#34;it\u0026#34;, \u0026#34;really\u0026#34;); put(\u0026#34;works\u0026#34;, \u0026#34;!\u0026#34;); }}; Set\u0026lt;String\u0026gt; set = new HashSet\u0026lt;String\u0026gt;() {{ add(\u0026#34;It\u0026#34;); add(\u0026#34;works\u0026#34;); add(\u0026#34;with\u0026#34;); add(\u0026#34;other\u0026#34;); add(\u0026#34;collections\u0026#34;); add(\u0026#34;too\u0026#34;); }}; It is called double brace initialization in Java and I have never seen it 
used by anyone… Is it because hardly anyone knows about it?\n…after publishing this article many readers were quick to let me know that this is a dangerous feature that should be avoided! Use helper methods like List.of() instead.\nFinal value initialization can be postponed It is a small thing, but some people assume that you have to initialize final values as you declare them. This is not the case. You just need to make sure that you initialize them only once. Check this valid code:\nfinal int a; if(someCondition){ a = 1; } else { a = 2; } System.out.println(a); This can get quite tricky when we mix in initializer blocks and other constructs.\nJoint union for extending generics Despite a suspicious implementation (type erasure), generics are still quite powerful in Java. I was surprised that we are allowed to be very specific about the type of generic we require. Have a look at this example:\npublic class SomeClass\u0026lt;T extends ClassA \u0026amp; InterfaceB \u0026amp; InterfaceC\u0026gt; {} It can be quite useful when you are fussy about your T!\nDo you have more? I hope you enjoyed my selection of Java trivia and curiosities. If you know other surprising features and behaviors that are worth sharing, be sure to let me know in the comments or on Twitter!\n","permalink":"https://e4developer.com/posts/java-surprises-unexpected-behaviours-and-features/","summary":"\u003cp\u003eJava is a very mature programming language – in fact, it is over 21 years old, so if it was a person it could drink even in the USA! With age comes wisdom, but also with age comes quirkiness… at least sometimes. 
In this article, I will look at some of the more surprising and unexpected behavior and features of the language.\u003c/p\u003e\n\u003cp\u003eHere we go, in no particular order, a collection of Java surprises to amuse you and impress your friends!\u003c/p\u003e","title":"Java surprises - Unexpected behaviours and features"},{"content":"Words like leadership and management are used often when discussing software projects. While they may sound similar, they are quite different and are often (but not always) performed by separate people. In this article, I will look closer at these two terms and explain why one is more difficult than the other.\nLeadership defined You could try to define leadership along these lines:\nInspiring and motivating people to act towards achieving positive goals. They also protect and nurture the teams and individuals.\nMy attempt at defining leadership\nLeaders attempt to inspire and motivate. I would consider these high-level values key criteria for calling anyone a leader (software or not). Protecting the team and individuals is how you keep the leader status (and the team intact) and nurturing is all about growing future leaders and helping your people flourish.\nIn the context of software development, some typical actions of a leader would be:\nTaking on a difficult piece of work (protecting) Teaching/mentoring less experienced colleagues (nurturing) Fostering positive team spirit (motivating) Giving an example with your conduct and quality of work (inspiring) These are just some examples, and I am sure that you can think of many more.\nOften, the leaders in software development teams do not have an official line-management responsibility. Sometimes, they do not even have a fancy title. 
You can start leading before anyone officially calls you a leader.\nA good leader can really elevate their team, not only making work happen quicker and be of higher quality, but also leaving a lasting positive impact on the people they worked with.\nManagement defined What is management then? Let’s go with a dictionary definition:\nThe process of dealing with or controlling things or people.\nGoogle dictionary\nWhile leadership deals with indirect control, management is about directly controlling people.\nManagers will pretty much always have official titles and often have line-management responsibilities.\nIn the context of software development, managers often deal with things like:\nChoosing which projects the team will work on Organizing work (although in most Agile methodologies, the team should be doing that) Promoting and hiring people Reviewing progress etc. These things are very important, and it is important that competent people take care of them. In reality, though, they don’t impact the quality of the end product as much as many people imagine. In the end, the delivery rests with the team, and while the team can be managed, to really excel it needs good leaders.\nBecoming a good leader As for the leaders I mentioned – anybody can become a leader, as long as they do what is required (motivate, inspire, nurture and protect). You can be a leader as:\nA project manager – you are already a manager and you have “management power”; you can use it to protect the team, educate them about the business realities and foster a good atmosphere A new developer – maybe you know technologies that others on the team are not yet familiar with? You are willing to help others and are quick to learn. You never say no when somebody needs your help? Servant leadership is very powerful. A guy with the title “Lead Developer” – live up to your title. Help others, do the difficult work. Invite others to pair-program with you. Share your knowledge and experience. 
Give feedback tactfully. You get the idea. This is not about being the one guy who shouts “I am the number one”. It is quite the opposite, in fact. It is about the team and being there for the team.\nLet’s look at more specific aspects of leadership that can be difficult to get right.\nHow to motivate? Motivating people is a fascinating topic. I have written an article titled Secrets to Highly Motivated and Happy Software Teams and if you want the details, go ahead and read it. The short version is here.\nMotivation is all about giving people three things:\nAutonomy – the ability to choose their own course towards achieving a goal Mastery – an urge to improve, to get better at something that matters Purpose – working towards something that matters Of course, it is also hard to be motivated if you are working in a toxic environment. Creating a positive and safe environment is a key element of the puzzle.\nHow to inspire? Inspiring people to be their best means leading by example. I see the key elements of that as:\nDoing the work yourself – you need to have some credibility Sticking to your principles and values Staying positive and resourceful, especially when the going gets tough Going sometimes above and beyond the call of duty It is hard to give a simple formula for inspiration, as this will be about your own strengths. Seeing someone who is professional, works hard and helps others makes others want to do the same.\nHow to nurture? One of the often overlooked aspects of leadership is nurturing others. Helping others grow is key to creating a long-lasting impact with your leadership. Good examples of this would be:\nHelping people learn by sharing your experience. Providing valuable, sometimes even negative feedback when necessary. Providing negative feedback tactfully and effectively can be difficult in itself. Recognizing others’ potential and helping them see it. Helping others become leaders This is not a zero-sum game. 
Helping others to grow will only make your work-life better. Who does not want to work with an amazing team? Stop dreaming and help them become amazing!\nDo we need management? With all this raving about leadership, I might have given an impression that we don’t need management. You need management to run a company, but don’t look to management to fix your development problems! They can help you remove problems and empower your team, but the quality of the delivery is in the hands of the delivery team.\nThe best solution is working towards a trust-based relationship between the delivery team, their leaders and management. The managers control what has to be controlled and the leaders (that could be the whole team) lead the delivery.\nFurther reading On the subject of leadership I recommend checking out:\nSecrets to Highly Motivated and Happy Software Teams Soft Skills for Software Developer – my article on Scott Logic blog Leaders Eat Last (Amazon) – the book Drive: The Surprising Truth About What Motivates Us (Amazon) – the book ","permalink":"https://e4developer.com/posts/leading-developers-vs-managing-them/","summary":"\u003cp\u003eWords like leadership and management are used often when discussing software projects. While they may sound similar, they are quite different and are often (but not always) performed by separate people. In this article, I will look closer at these two terms and explain why one is more difficult than the other.\u003c/p\u003e\n\u003ch2 id=\"leadership-defined\"\u003eLeadership defined\u003c/h2\u003e\n\u003cp\u003eYou could try to define leadership along these lines:\u003c/p\u003e\n\u003cblockquote\u003e\n\u003cp\u003eInspiring and motivating people to act towards achieving positive goals. They also protect and nurture the teams and individuals.\u003c/p\u003e","title":"Leading developers vs managing them"},{"content":"Amazon Web Services (AWS) is the most popular Cloud solution out there. More and more companies are using it every day. 
It makes development easier, safer, cheaper and better. Since it is becoming an expectation for backend developers to be familiar with AWS (or other Cloud solutions) I compiled here some of the best resources and ideas for learning it.\nWhat does it mean to learn AWS? Before learning AWS, it is worth thinking what you want to get out of that learning. Are you more interested in designing solutions or actually working with some of the cloud services provided? AWS recognizes three key ways of learning AWS:\nSolution Architect perspective – focused around understanding cloud-based system design and architecture Developer perspective – focused around developing services that live in the cloud SysOps perspective – focused around maintaining, securing and deploying cloud solutions I recommend taking the Solution Architect view first, as it will give you a broader understanding of what AWS has to offer. Once you know what you have at your disposal, you can go deeper in areas that interest you.\nOk, so how do you get started?\nGet a free AWS account AWS offers a very generous free tier account that you can use for a year! On top of that, you get plenty of services that will always stay free. Just visit the link: https://aws.amazon.com/free/ and register!\nStart playing with AWS Once you get that free account, you don’t have to wait for anything, you can jump right in and start experimenting. 
AWS offers plenty of quick tutorials and guides that will get you started with:\nHosting your website Deploying your application Dipping your toes into the serverless world Starting with machine learning Much more Just check all the guides that they have compiled here: https://aws.amazon.com/getting-started/\nSign up for Online Courses It is important to get hands-on quickly, but with the overwhelming amount of resources, how do you make sure that you are actually getting a comprehensive understanding of what is important?\nIt is no secret that I am a big fan of online video courses and that I have learned a ton from them. When it comes to AWS there are two (or three, depending on how you count) quality resources that I would like to recommend to you:\nAWS with Pluralsight Pluralsight is an amazing resource that I use for just about everything. You can find multiple highly rated AWS courses there (over 100 if you have an eternity to spend):\nOf course, I did not have time to check them all out, but the one I would recommend is:\nIt gives a good overview of the whole platform and will help you prepare for the AWS Solution Architect Associate exam if you want to take it.\nAWS with ACloudGuru An absolutely amazing resource if you want to learn cloud-related technologies is ACloud.Guru. It is similar to Pluralsight, but with a laser focus on cloud technologies.\nWhat is great about ACloudGuru is that they offer some of their courses on Udemy and if you are interested in taking only a single course, it may end up much cheaper for you! Here is the course that I took preparing for the AWS Solution Architect Associate certification:\nIt has over 20 hours of top quality material that gets regularly updated.\nGet AWS Certification I have always been skeptical of getting certifications. I have never bothered with Java or anything like that since I did not see the point. 
With AWS it is a little bit different, as learning for the certification gives you a clear idea of what you should be learning.\nIf you do not wish to get certified, it is still very useful to take courses oriented towards passing the certification, as these will give you a good structure and a well-rounded understanding.\nAfter thinking about the advantages and disadvantages of the certification I ultimately decided to go for it and I will be sitting my exam in November. After all the learning, why not get something to show for it? While most certifications carry little value, employers and prospective clients seem to highly value the certificates that AWS issues!\nSummary I hope this article gave you a clear idea of how to start learning that amazing platform that is AWS. To recap:\nGet a free tier account Start playing around Check out some online courses if you are hungry for more Get certified if you got hooked! I consider the cloud a perfect match for microservices and modern development, so this is not the last time you will read about AWS on this blog. Till next time!\n","permalink":"https://e4developer.com/posts/how-to-learn-aws/","summary":"\u003cp\u003eAmazon Web Services (AWS) is the most popular Cloud solution out there. More and more companies are using it every day. It makes development easier, safer, cheaper and better. Since it is becoming an expectation for backend developers to be familiar with AWS (or other Cloud solutions) I compiled here some of the best resources and ideas for learning it.\u003c/p\u003e\n\u003ch2 id="what-does-it-mean-to-learn-aws"\u003eWhat does it mean to learn AWS?\u003c/h2\u003e\n\u003cp\u003eBefore learning AWS, it is worth thinking about what you want to get out of that learning. Are you more interested in designing solutions or actually working with some of the cloud services provided? 
AWS recognizes three key ways of learning AWS:\u003c/p\u003e","title":"How to learn AWS"},{"content":"Among the many OOP design patterns described, the one that influenced my development the most is the Strategy Pattern. In this article, I will briefly explain what the Strategy Pattern is and why it is so important.\nStrategy Pattern Defined The idea behind the Strategy Pattern is as follows:\nImagine that you have SomeClass that needs to implement varying behaviour depending on a situation. You will implement it via composition, creating a Strategy interface that would encapsulate this varying behaviour. The interface will have only one method (i.e. execute) that runs when the strategy is used. It will look something like this:\nUsing Strategy Pattern – Examples Ok, so where can you use the Strategy Pattern? Nearly everywhere it turns out! I like it so much since it is the main way of following the Composition over Inheritance principle.\nLet’s look at some scenarios:\nYour class writes output. You can provide different Writers in order to write to a file or standard output. This Writer interface becomes your Strategy. I have used it in the past to provide different Algorithms in a financial optimisation scenario. This pattern is also used whenever you use something like a Comparator for sorting lists in Java. The pattern works equally well for trivial and more complex tasks.\nStrategy Pattern Java Example To make it even clearer, I will show you a simple example of the pattern in Java. 
I will implement a Parrot class that can repeat your text either very loudly or quietly, depending on the strategy.\npublic class Parrot { private final ParrotStrategy parrotStrategy; public Parrot(ParrotStrategy parrotStrategy) { this.parrotStrategy = parrotStrategy; } void repeat(String text){ parrotStrategy.repeat(text); } } public interface ParrotStrategy { void repeat(String text); } public class LoudParrotStrategy implements ParrotStrategy { @Override public void repeat(String text) { System.out.println(text.toUpperCase()+\u0026#34;!!!!!!\u0026#34;); } } public class QuietParrotStrategy implements ParrotStrategy { @Override public void repeat(String text) { System.out.println(text.toLowerCase().replace(\u0026#34;!\u0026#34;, \u0026#34;\u0026#34;)); } } This can be used like this:\npublic class Main { public static void main(String[] args){ Parrot loudParrot = new Parrot(new LoudParrotStrategy()); Parrot quietParrot = new Parrot(new QuietParrotStrategy()); loudParrot.repeat(\u0026#34;Wake up!\u0026#34;); quietParrot.repeat(\u0026#34;Good Morning!\u0026#34;); } } With the output:\nWAKE UP!!!!!!! good morning Is the Strategy Pattern the same as using lambdas in Java? When you think about the Strategy Pattern on a conceptual level, it is pretty much exactly the same as using lambdas! The only real difference is that the strategies can later be reused more easily in different classes.\nYou can also see the Strategy Pattern when you look at Java 8 with its @FunctionalInterface. It provides a sort-of generalised Strategy pattern for lambdas. I really recommend the interesting article on Baeldung titled Functional Interfaces in Java 8 that talks more about them.\nWhy is Strategy Pattern so important? The Strategy Pattern is simple and widely used, and that already makes it important. There is more to it though. It helps you achieve the Open-Closed Principle:\n“software entities (classes, modules, functions, etc.) 
should be open for extension, but closed for modification”\nMeyer, Bertrand (1988). Object-Oriented Software Construction.\nThis is one of the most important of the SOLID principles (according to Uncle Bob at least). With the Strategy Pattern, you can easily separate the bit of your class that is subject to change and encapsulate it.\nWhenever you are tempted to start building inheritance hierarchies, stop and think whether the Strategy Pattern would solve the problem better.\nSummary I am sure that we all have seen the Strategy Pattern in action in the past. You might have even been using it without knowing the name. I found that learning about it explicitly helped me spot chances to use it quickly and ultimately write cleaner and better code.\n","permalink":"https://e4developer.com/posts/my-favourite-design-pattern-strategy/","summary":"\u003cp\u003eAmong the many OOP design patterns described, the one that influenced my development the most is the Strategy Pattern. In this article, I will briefly explain what the Strategy Pattern is and why it is so important.\u003c/p\u003e\n\u003ch2 id=\"strategy-pattern-defined\"\u003eStrategy Pattern Defined\u003c/h2\u003e\n\u003cp\u003eThe idea behind the Strategy Pattern is as follows:\u003c/p\u003e\n\u003cp\u003eImagine that you have \u003cem\u003eSomeClass\u003c/em\u003e that needs to implement varying behaviour depending on a situation. You will implement it via composition, creating a \u003cem\u003eStrategy\u003c/em\u003e interface that would encapsulate this varying behaviour. The interface will have only one method (i.e. \u003cem\u003eexecute\u003c/em\u003e) that runs when the strategy is used. It will look something like this:\u003c/p\u003e","title":"My favourite Design Pattern - Strategy"},{"content":"It seems that everyone wants to work remotely these days. Why wouldn’t they? There are some obvious benefits, such as reduced commute and working from the comfort of your own home. 
What is the impact on the team though? In this blog post, I focus on the impact on the team rather than individual benefits.\nWorking from home – developers’ perspective It is no secret that developers really enjoy working remotely. The key reasons seem to be:\nCutting out the commute Easier to achieve a good work-life balance Fewer distractions Everyone likes their home It is also widely accepted that when working from home, developers are often more productive than in the office environment.\nWith all that in mind, we have a clear case- working from home benefits individual developers. Does it benefit the team though?\nThe potential problems with homeworking If your company is set up so that everyone works remotely, these may not apply. In most cases, the majority of the team members on any given day would still be working in the office. Let’s see what the potential problems are:\nIt can be difficult to contact a home worker when something is urgently needed There is a possibility of exclusion and misunderstanding when the team agrees on something without consulting the home worker It is more difficult to carry out activities such as pair programming or collaborative design sessions Some problems raised in sprint retrospectives (or other personal challenges) are difficult to handle remotely Does that mean that working from home is simply bad for the team? Do these concerns trump individual benefits? Absolutely not! It is important to recognize the potential problems to proactively avoid them.\nHow to make homeworking work for your team We will take the issues raised and tackle them one by one.\nIt can be difficult to contact a home worker when something is urgently needed – You need to make sure that everyone is readily contactable in case of a work “emergency”. If instant messaging is not good enough, perhaps a mobile number for everyone (this could be a separate phone) is available. 
Some people may be “excluded” from the team – If most people are working from the office, there should be a minimum amount of “face time” for everyone to at least get to know the team. You don’t want “anonymous” home-working people that no one talks to. Decisions may be taken without consulting home workers – For major decisions, it would be good to include people working from home either via a chat system like Slack or an audio-conference, or have everyone present in the office for the discussion. Having these “get together” days is a good idea either way. Decisions may be taken without telling the home workers – If some decision does not require everyone’s involvement, yet it impacts everyone, there should be a “remote friendly” way of communicating it. Team wiki pages, Slack, etc. There are multiple ways to tackle this problem; it is important that everyone knows what is being used and follows that. Pair programming and collaborative design sessions – In reality, with merge request reviews and modern chat tools, this is usually not as bad as it sounds. Still, a major collaborative design is easiest done in person in front of a whiteboard (personal opinion). In that case, I recommend setting these up in advance so that interested parties can be present. Some problems are difficult to handle remotely – Things such as team retrospectives, or major issues/changes are often communicated and agreed more easily in person. I think the best solution is to have everyone in the office for the end-of-sprint activities (if possible and if you are following Scrum). Anything else – Your team may face different challenges. Where there is a will, there is a way. With modern tech, there is really no problem that can’t be solved by a group of smart people getting together and coming up with solutions. As you can see there are multiple challenges to home working, but there are just as many solutions. 
I think the most important advice I can give would be to make sure that everyone knows and agrees to a set of rules for working from home. I would go as far as recommending writing them down and having them easily available for everyone.\nAs new challenges come up, the rules can always be changed or expanded. If you are following an agile methodology, this should feel rather natural.\nSummary Working from home has amazing benefits for individual developers. However, it is often seen as a challenge from a team perspective. Based on my experience, this challenge often seems greater than it really is. As long as everyone is on the same page about how it will work- the individual benefits can be reaped without affecting the team.\nShould you let your developers work from home? Absolutely! Just make sure that you all agree on which rules to follow!\n","permalink":"https://e4developer.com/posts/should-you-let-your-developers-work-from-home/","summary":"\u003cp\u003eIt seems that everyone wants to work remotely these days. Why wouldn’t they? There are some obvious benefits, such as reduced commute and working from the comfort of your own home. What is the impact on the team though? In this blog post, I focus on the impact on the team rather than individual benefits.\u003c/p\u003e\n\u003ch2 id=\"working-from-home--developers-perspective\"\u003eWorking from home – developers’ perspective\u003c/h2\u003e\n\u003cp\u003eIt is no secret that developers really enjoy working remotely. The key reasons seem to be:\u003c/p\u003e","title":"Should you let your developers work from home?"},{"content":"Single Responsibility Principle, as defined in the very famous set of SOLID principles, is often misunderstood. When asked what it means, most developers go with- “a class should do only one thing”, or something along these lines. This is simplistic and frankly- wrong! Intrigued? 
Read on!\nSingle Responsibility Principle – the real definition Single Responsibility Principle (SRP), as defined by Robert C. Martin, states:\n“A class should have only one reason to change.”\nRobert C. Martin\nThis is very different from the “a class should do only one thing” version.\nIn one of his later books (“Clean Architecture”, reviewed here) Robert C. Martin goes even further, clarifying his intent behind this principle:\n“A module should be responsible to one, and only one, actor.”\nRobert C. Martin – The final version of the SRP from “Clean Architecture”\nThis is even more precise. Before we go deeper into these definitions, let’s look once again at why “a class should do only one thing” is a weird idea…\nA class should do more than one thing! Ok, so what does it even mean that a class should do only one thing? Does that mean that we are only allowed one public method? That there is only one piece of business logic allowed? It is hard to see how this logic applies to classes and OOP.\nMy guess is that this is the principle behind writing good functions, misunderstood and extended to OOP and the SOLID principles.\nIt is good if each function has a specific goal, and if there is too much logic carried out by a single function, you can then refactor it into multiple, more specialised functions.\nThat makes sense on a function level, but it does not make sense on a class level.\n“A module should be responsible to one, and only one, actor” Let’s look at the correct formulation of the Single Responsibility Principle and see what it really means.\nFirst of all, the SRP talks about a module. Uncle Bob clarifies that by that he means a source file. After all, the principle can apply to more than simply classes.\nWhat about the actor that the principle is talking about? 
It can mean:\nUser Stakeholder A group of users or stakeholders that are requiring the system to change in the same way After all, if your “User” is, let’s say, an “account manager”, there could be more users fulfilling that role. It is good to think of that “actor” as a specific type of user in the system. It does not even have to be an actual person.\nAn example of the Single Responsibility Principle Let’s look at a simple example of the Principle:\nImagine you are implementing a Bookstore application You have a class called Book The Book has a method called setStockLevel() – an inventory manager is an actor here The Book has a method called calculatePrice() – the salesman will be interested The Book has a method called getDetails() – the website presentation engine is an actor here As you can see you have three different groups of actors that may need changes to the same class.\nWhat are the problems with that?\nThere is a risk of code duplication if we need to start tracking inventory for other items. This should be separated. The same issue appears when the pricing algorithm is being developed further or generalised. Multiple groups of developers will start having merge issues and completely unrelated code lives in the Book class. What is the solution?\nKeep the Book class focused. Focus on the methods that are related to the presentation on the website. Move the pricing logic into a pricing module that will possibly use the Book class. Do the same with the inventory. There could be many things that the Book class is doing:\nGetting the details of the book Getting the picture Logging access to the class for debugging etc. But it will all be oriented around the presentation of the Book to the customer (via a website in this case).\nWhy Single Responsibility Principle? 
There are many benefits from following the SRP:\nMaking code changes easier The code is more readable Easier to reason about the system Fewer reasons to change multiple files, changes become more focused Improved encapsulation and cohesion (this is re-phrasing the above partially) Together with other SOLID principles, it helps in achieving the Clean Architecture Summary Single Responsibility Principle stated as “A module should be responsible to one, and only one, actor” is more nuanced, and ultimately more useful than many developers expect. If you already knew the real SRP, spread the word to those who don’t- if you learned it from my blog- I am happy that I could help!\n","permalink":"https://e4developer.com/posts/single-responsibility-principle-do-you-know-the-real-one/","summary":"\u003cp\u003eSingle Responsibility Principle, as defined in the very famous set of SOLID principles, is often misunderstood. When asked what it means, most developers go with- “a class should do only one thing”, or something along these lines. This is simplistic and frankly- wrong! Intrigued? Read on!\u003c/p\u003e\n\u003ch2 id=\"single-responsibility-principle--the-real-definition\"\u003eSingle Responsibility Principle – the real definition\u003c/h2\u003e\n\u003cp\u003eSingle Responsibility Principle (SRP), as defined by Robert C. Martin states:\u003c/p\u003e\n\u003cblockquote\u003e\n\u003cp\u003e “A class should have only one reason to change.”\u003c/p\u003e","title":"Single Responsibility Principle - do you know the real one?"},{"content":"Today I want to share with you a review of the most exciting book I have listened to so far – “Algorithms to Live By: The Computer Science of Human Decisions” by Brian Christian and Tom Griffiths. I have already mentioned it on this blog when simulating the secretary problem. This is just one example of the many fascinating problems this book talks about. 
Continue reading to find out why it makes such an amazing listening experience.\nWhat makes a good AudioBook for developers You may be surprised by the title of this article. After all, what is a “Developer Audiobook”? I love listening to audiobooks; I consider them my secret weapon. Most audiobooks are not really about code, frameworks or algorithms. They talk about all the other important things. I was looking for something more… computationally stimulating?\nAnyway, I was wondering if there are any books that talk about algorithms and mathematics, but do it in such a way that you can listen to them rather than read and follow pseudocode. That search led me to “Algorithms to Live By”.\nSo what makes a good “Developers Audiobook” in my opinion?\nIt talks about non-trivial technological problems It is engaging to listen to Some of its content can be applicable to my (developer’s) work Most developers that I know would enjoy it And here is a question for you- if you know other similar books, please make sure to let me know either in the comments or on the e4developer Twitter account.\nAlgorithms to Live By – What is it about? This is a book that looks at different life situations and how different algorithms apply to them. Some of the problems tackled are:\nWhen to accept an offer on a house? How to effectively find the best flat? Should you sort your bookshelf? Should you sort your emails? What are the world’s biggest libraries doing when it comes to sorting their books? Which parking space to choose? If you don’t know the payouts of multiple “one-armed bandit” machines, what are some optimal strategies for playing? Why do we forget so much when we are older? …and of course- How to hire secretaries? Granted, many of these problems are really mostly mathematical models that are far removed from reality… or so it may appear at first! 
The book makes a decent effort at taking these “cleaner” versions of problems and seeing how they apply to reality.\nThe discussion of sorting and what the goal of it is (making future retrieval faster) made me re-think some of the things I have been doing. Sometimes we tend to just act without thinking… is this really necessary?\nThis is where the book really succeeds and leaves a lasting impression. You start to see these algorithms play out in real life and you may start questioning yourself- am I really rational here?\nFascinating problems around us After I finished listening to the book I felt like this was just the beginning. Thanks to the book and the new perspective, I was very curious to start testing things and see if I could understand human behaviour more deeply.\nI wrote that article on simulating the secretary problem (which may give you a good taste of what the book is like) as the first experiment inspired by the book. I promise you it won’t be the last you will see on this blog.\nThe fact that we are developers gives us a sort of special power. Many of the problems from the book are mathematically very challenging to prove and reason about… They are also fairly easy to simulate, and we can observe different solutions empirically. As developers, we have a power that many mathematicians who originally looked at these issues would be sure to envy.\nSummary “Algorithms to Live By: The Computer Science of Human Decisions” made a lasting impression on me. It is the kind of book that fascinates, entertains and changes your perspective on things. I give it my highest recommendation. Enjoy reading or listening!\nPS. 
I enjoyed listening to this book so much that I bought a physical copy for reference!\n","permalink":"https://e4developer.com/posts/algorithms-to-live-by-my-favourite-developer-audiobook/","summary":"\u003cp\u003eToday I want to share with you a review of the most exciting book I have listened to so far – \u003cem\u003e“Algorithms to Live By: The Computer Science of Human Decisions”\u003c/em\u003e by Brian Christian and Tom Griffiths. I have already mentioned it on this blog \u003ca href=\"https://e4developer.com/posts/simulating-the-secretary-problem-with-java/\"\u003ewhen simulating the secretary problem\u003c/a\u003e. This is just one example of the many fascinating problems this book talks about. Continue reading to find out why it makes such an amazing listening experience.\u003c/p\u003e","title":"“Algorithms to Live By” - My Favourite Developer Audiobook"},{"content":"There are numerous articles out there talking about agile teams and how being agile will change your life/project. I agree with what the Agile Manifesto proposes, but overall, I think that the agile movement lacks a scientific approach. In this article, I will apply my understanding of physics to “prove” and explain some of the agile phenomena.\nBefore you get angry and start telling me that I am abusing either physics or agile here- take a deep breath and remember:\nIf I had no sense of humor, I would long ago have committed suicide.\nMahatma Gandhi\nOkay, wow, that one landed a bit heavy. All I am trying to say here is that not everything I write is 100% serious.\nWithout further disclaimers, let’s look at different laws of physics and how they apply to agile!\nThe law of conservation of energy Also known as the first law of thermodynamics:\nThe law of conservation of energy states that the total energy of an isolated system is constant; energy can be transformed from one form to another, but can be neither created nor destroyed.\nWikipedia\nI think this is fairly obvious. 
Your team has a limited amount of energy. You can spend it in a few ways:\nCreating software features that users want Creating software features that nobody wants Creating bugs Useful meetings Pointless meetings Wasting time on the Internet (although there is an argument to be made that this is a way of bringing energy into the system) As you can see, the first law of thermodynamics clearly states that you need to direct your efforts wisely, as you have limited energy. Quotes such as:\n“Working software over comprehensive documentation”\nhttp://agilemanifesto.org/\nshow that the Agile methodology is aware of this limited energy in the system.\nLet’s see what other laws of physics work perfectly (I am really annoying physicists here) in the agile context…\nNewton’s Third Law of Motion Law III: To every action there is always opposed an equal reaction: or the mutual actions of two bodies upon each other are always equal, and directed to contrary parts.\nIsaac Newton\nPrepare for some profound comparisons and bending of physics laws in such a non-physics context that it makes me uncomfortable.\nI like to think of this as “The Law of the effort of introducing Agile methods”. I would rephrase it as follows:\nThe action required to introduce agile is proportional to the difference between the team’s working practices and Agile practices. The resistance from the team is proportional to the action required.\nBartosz Jedrzejewski\nThe less agile the team is, the more you need to push. The more you push, the more resistance you will encounter. Get ready. Don’t expect an easy ride- it’s proven by physics after all…\nUncertainty principle Why not? After all, quantum physics applies at very small (atomic and subatomic) scales and to Agile software development teams… I am sure I have seen it in a paper somewhere!\nThe uncertainty principle is often misunderstood and misrepresented. 
In common language the spirit of the law can be represented as:\nIn quantum physics you can’t know the exact speed and position of a particle. The more precisely you know one, the less precisely you know the other. There is a limit to that precision. This is a fundamental law of nature and has nothing to do with the instrument used to observe that particle.\nBartosz Jedrzejewski’s explanation of the Uncertainty Principle\nIf you want to go deeper on the actual law, Wikipedia is your friend.\nOk, with that out of the way, how does it relate to Agile?\nWe need to translate the concepts a little bit, as the physical speed and physical position of the Agile team usually do not matter… Unless you are outsourcing your development, but that’s a topic for another blog post!\nWe will translate the concepts as follows:\nPhysical location = The amount of work left to do in the project Physical speed = The real speed at which the development can happen The Agile uncertainty principle states that:\nIf you define the whole project upfront, you will know nothing about the true speed of delivery. If you focus on maximum delivery speed, you will not know the scope of what will be delivered in the end.\nBartosz Jedrzejewski and his gross misrepresentation of the Uncertainty Principle in the light of Agile development\nTurns out that the uncertainty principle is the Waterfall vs Agile argument scientifically presented. I bet that Heisenberg did not expect that!\nThe wave theory of light (and Agile) If you think that Quantum Physics can’t possibly have more to do with Agile, you would be wrong.\nI am talking here specifically about the phenomenon shown in the Double-slit experiment. You know, the experiment that showed that an electron can literally be in two places at the same time.\nOkay, this one is a bit physics-heavy, but hear me out. I am sure you have seen a similar conversation:\nPerson 1: Is it done? Person 2: Yes! Person 1: Is it done done? Person 2: No! Person 1: So is it done then? 
Person 2: Not done done, but done… You might argue here that the problem is the existence of different levels of done. In reality, there could be two separate reports produced for two separate audiences, one treating the item as done, the other not. I wonder if they also create interference patterns?…\nAs you can see, it is common in Agile for multiple realities to exist at the same time and for different things to have different (contradictory) states in these realities… That brings me to the last, but possibly my favourite physics-agile parallel.\nThe many-worlds interpretation is an interpretation of quantum mechanics (and Agile) This is an interpretation of quantum mechanics that states something along the lines of:\n…all possible alternate histories and futures are real, each representing an actual “world” (or “universe”).\nWikipedia\nThis is not completely insane, as this is just one interpretation of what wavefunction collapse is. Of course, it also translates to Agile!\n…Maybe not quite an infinite number of parallel universes, but certainly a vast number of universes existing all at once:\nDevelopment team feeling that the project is successful Security claiming that everyone is doomed Project management unhappy, claiming no work is being done Company directors happy with how agile the company became And testers complaining about the waterfall approach getting even worse I am really just scratching the surface here. It is possible that the whole event plays out and everyone remembers it differently- as a success, failure, agile, not-agile etc.\nHere I will break away from these physics rants and actually end with a super lofty quote from Crucial Conversations (good book):\n“The pool of shared meaning is the birthplace of synergy”\nKerry Patterson, Crucial Conversations\nI don’t believe in the many-worlds interpretation of quantum mechanics (Sheldon Cooper would be disappointed with me here). 
I also don’t believe it has a place in Agile development.\nIf you see people starting to believe it and happily living within their bubbles, try to bring some common understanding and maybe together you can go back to working towards the same goal.\nI recommend the Phoenix Project as a good book that explores this journey towards common understanding and working together.\nSummary Maybe Agile is not exactly like physics. This is not the point here. With these semi-serious comparisons, I wanted to bring your attention to common problems with Agile development and building better software. Laws of physics can’t be changed, the way people work- can.\nBonus “physics” points if you recognise the main photo!\n","permalink":"https://e4developer.com/posts/the-physics-of-the-agile-methodology/","summary":"\u003cp\u003eThere are numerous articles out there talking about agile teams and how being agile will change your life/project. I agree with what \u003ca href=\"http://agilemanifesto.org/\"\u003eAgile Manifesto\u003c/a\u003e proposes, but overall, I think that the agile movement lacks a scientific approach. In this article, I will apply my understanding of physics to \u003cem\u003e“prove”\u003c/em\u003e and explain some of the \u003cem\u003eagile phenomena\u003c/em\u003e.\u003c/p\u003e\n\u003cp\u003eBefore you get angry and start telling me that I am abusing either physics or agile here- take a deep breath and remember:\u003c/p\u003e","title":"The Physics of the Agile Methodology"},{"content":"Spring Boot is enjoying a seemingly never-ending growth of popularity. While only released in 2014, it has managed to overtake the Java serverside in less than five years. When starting a new project, a sensible question to ask is- “should I use Spring Boot?”. In this article, I will help you answer this question!\nEvery project is different, but we can use some characteristics with which we can compare them. 
Based on these characteristics I will advise you if Spring Boot is generally a good idea.\nAre you working on Microservices Architecture? One of the main selling points of Spring Boot is its use for microservices architectures. I will agree here- I have used Spring Boot to implement a microservices architecture in production for a large enterprise and it works well.\nWhat I would point out though, is that many companies keen on using Spring Boot sort of miss a trick by not looking at Spring Cloud. I previously wrote about Spring Cloud as a Blueprint for Microservices Architecture – I really think this is a good way to think about it. If you are already thinking about using Spring Boot and implementing microservices – check out Spring Cloud and what it has to offer.\nIs Spring Boot a good choice for microservices architecture? Definitely yes!\nIs your project using Kotlin? Kotlin seems to be getting very popular, very quickly. Since about mid-2017 there seems to be major interest in the language and developers are keen to use it on the server side.\nIf you are a Kotlin enthusiast considering Spring Boot, I have some great news for you. Spring Boot 2.0 is built on top of Spring 5 which brings much better support for Kotlin. You can read about it in the article published on the official Spring website- Introducing Kotlin support in Spring Framework 5.0.\nIs Spring Boot a good choice when working with Kotlin? Definitely yes!\nAre you going to use Serverless Architecture? Another trend that is gaining in popularity is the Serverless Architecture. With AWS Lambda and Azure Functions, it is getting easier and easier to run your system… “without servers”. I put that term in quotes as there seem to be some arguments about what that means. I will let you be the judge.\nWith some gymnastics, you can run your Spring Boot application serverless, but should you? I would argue that this is not the best use of either Spring Boot or Serverless Architectures.\nWhat should you use instead? 
If you like what the guys behind Spring are doing, you should check out Project Riff. It is still in its early stages, but rather interesting.\nAt the risk of sounding heretical on my own blog- maybe even consider another language? Java works serverless, but I am not convinced that using the JVM is the best approach. If you disagree, let me know in the comments.\nSo… Is Spring Boot a good choice for Serverless Architectures? I don’t think so.\nAre your developers new to Spring? Spring is a large ecosystem and it can feel daunting having to learn it from scratch. If your team has never used it before, you may be wondering if it is the right choice.\nI had the pleasure of working on a Spring Boot project with a number of developers who had no experience of Spring at all. I found that Spring Boot has a rather nice learning curve. You can get the basics very quickly and autoconfiguration guides you as you learn the framework.\nI actually found Spring Boot to be one of the most beginner-friendly serverside frameworks out there. Perhaps this is one of the reasons for its wild popularity?\nIf you are looking for a good place to start learning Spring Boot I can recommend Pluralsight for its courses (I wrote an article about learning with Pluralsight and I am an affiliate) and the amazing collection of official Spring guides.\nIs Spring Boot a viable choice for teams with no Spring experience? Definitely yes!\nIs your codebase expected to be very simple? What if you don’t think you need all the features that Spring Boot offers? Maybe you are not interested in the dependency injection and the wonders of autoconfiguration. You just want to write some simple REST APIs.\nHere you have a choice- Spring Boot is still great for basic REST APIs, but you could give a chance to microframeworks like Javalin and Spark Java.\nI have elaborated much more on that point in The Quest for Simplicity in Java Microservices.\nIs Spring Boot a viable choice for simple REST APIs? 
Definitely yes, but also check out microframeworks!\nSummary It seems that Spring Boot is a great choice for most modern server-side development. Is that really a surprise? Spring Boot is incredibly popular for a reason!\nBefore you pick Spring Boot though, make sure that you are not falling for the “If all you have is a hammer, everything looks like a nail” trap and use the right tool for the job. Especially when going serverless or trying to write something “simple”.\n","permalink":"https://e4developer.com/posts/should-you-use-spring-boot-in-your-project/","summary":"\u003cp\u003eSpring Boot is enjoying a seemingly never-ending growth in popularity. While only released in 2014, it has managed to overtake the Java server side in less than five years. When starting a new project, a sensible question to ask is- \u003cem\u003e“should I use Spring Boot?”\u003c/em\u003e. In this article, I will help you answer this question!\u003c/p\u003e\n\u003cp\u003eEvery project is different, but we can use some characteristics with which we can compare them. Based on these characteristics I will advise you if Spring Boot is generally a good idea.\u003c/p\u003e","title":"Should you use Spring Boot in your project?"},{"content":"If you want to write a program that is able to play a strategy game, chances are good that you will be looking at the Minimax algorithm. This is especially true when it comes to games like chess, where variations of the Minimax algorithm are used to build the strongest chess-playing programs in existence. In this article, I will look at implementing the basic version of the Minimax algorithm with Java.\nMinimax Algorithm – a quick introduction Minimax is a simple algorithm that tells you which move to play in a game. 
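Before the informal outline below, it may help to see the recurrence that minimax computes, written in standard textbook notation (my formulation, not from the original post):

```latex
V(s) =
\begin{cases}
U(s) & \text{if } s \text{ is terminal} \\
\max_{a \in A(s)} V(\mathrm{result}(s,a)) & \text{if it is our turn to move} \\
\min_{a \in A(s)} V(\mathrm{result}(s,a)) & \text{if it is the opponent's turn}
\end{cases}
```

Here U(s) is the utility of a terminal state and A(s) is the set of legal moves from s; the best move at the root is the action that maximizes V.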
A detailed explanation is available on Wikipedia, but here is my quick, less rigorous outline:\nTake a game where you and your opponent take alternate turns Each time you take a turn you choose the best possible move (max) Each time your opponent takes a turn, the worst move for you is chosen (min), as it benefits your opponent the most Looking forward and using these assumptions- which moves lead you to victory? Minimax is basically doing what I described above, but with a simple algorithm. In this article, I will implement the most basic version of Minimax where I omit the two possible improvements:\nI will look forward through the entire game tree, finding the optimal strategy (impossible in more complicated games). I will not use any pruning of branches (inefficient in practical, non-trivial scenarios). Stopping rules and pruning can be added in the future when I use the algorithm to play chess or work with another non-trivial game.\nImplementing Minimax with Java Based on the pseudocode in “Artificial Intelligence: A Modern Approach” (Amazon), I have decided to implement the template for the algorithm. 
Written in Java, it can look like this:\npublic final class MinimaxTemplate { private MinimaxTemplate() {} public static State minimaxDecision(State state) { return state.getActions().stream() .max(Comparator.comparing(MinimaxTemplate::minValue)).get(); } private static double maxValue(State state) { if(state.isTerminal()){ return state.getUtility(); } return state.getActions().stream() .map(MinimaxTemplate::minValue) .max(Comparator.comparing(Double::valueOf)).get(); } private static double minValue(State state) { if(state.isTerminal()){ return state.getUtility(); } return state.getActions().stream() .map(MinimaxTemplate::maxValue) .min(Comparator.comparing(Double::valueOf)).get(); } //State Class } You can call minimaxDecision with a State object representing the current state of the game.\nTo get a usable implementation of Minimax you need to implement this State object template:\npublic static class State { public State(){ //create a state } Collection\u0026lt;State\u0026gt; getActions(){ List\u0026lt;State\u0026gt; actions = new LinkedList\u0026lt;\u0026gt;(); //generate actions return actions; } boolean isTerminal() { //add some logic return false; } double getUtility() { //add some logic return 0; } } Solving a simple game with Minimax While reading different articles on the subject I found Introduction to Minimax Algorithm by Baeldung, which described a simple game. 
This game inspired me to create something very similar:\nYou start with the number 21 Two players can reduce this number by the value 3, 4 or 5 The number can’t go down below 0 A player that can’t make a move loses The Baeldung article uses Nodes explicitly while I just fill in my template, coming up with this class:\nimport java.util.*; public final class Minimax { private Minimax() {} public static State minimaxDecision(State state) { return state.getActions().stream() .max(Comparator.comparing(Minimax::minValue)).get(); } private static double maxValue(State state) { if(state.isTerminal()){ return state.getUtility(); } return state.getActions().stream() .map(Minimax::minValue) .max(Comparator.comparing(Double::valueOf)).get(); } private static double minValue(State state) { if(state.isTerminal()){ return state.getUtility(); } return state.getActions().stream() .map(Minimax::maxValue) .min(Comparator.comparing(Double::valueOf)).get(); } public static class State { final int state; final boolean firstPlayer; final boolean secondPlayer; public State(int state, boolean firstPlayer){ this.state = state; this.firstPlayer = firstPlayer; this.secondPlayer = !firstPlayer; } Collection\u0026lt;State\u0026gt; getActions(){ List\u0026lt;State\u0026gt; actions = new LinkedList\u0026lt;\u0026gt;(); if(state \u0026gt; 4){ actions.add(new State(state-5, secondPlayer)); } if(state \u0026gt; 3){ actions.add(new State(state-4, secondPlayer)); } if(state \u0026gt; 2){ actions.add(new State(state-3, secondPlayer)); } return actions; } boolean isTerminal() { return state \u0026lt; 3; } double getUtility() { if(firstPlayer) return -1; else return 1; } } } That can be run by a Main class as follows:\npublic class Main { public static void main(String[] args){ System.out.println(\u0026#34;Welcome to my minimax algorithm\u0026#34;); boolean end = false; int val = 21; boolean first = true; while(!end) { System.out.println(\u0026#34;Current position = \u0026#34;+ val +\u0026#34;, Player one: 
\u0026#34; + first); Minimax.State s = new Minimax.State(val, true); Minimax.State decision = Minimax.minimaxDecision(s); val = decision.state; if(decision.isTerminal()){ end = true; System.out.println(\u0026#34;Current position = \u0026#34;+ val +\u0026#34;, Player one won: \u0026#34; + first); System.out.println(\u0026#34;Game over\u0026#34;); } first = !first; } } } This algorithm perfectly solves this simple game.\nWhat next? It is fun to use Minimax to solve simple games. In order to tackle something more serious there are three key improvements to be made:\nWe should be able to evaluate the game state before a win or loss is decided. For example, in chess, having more pieces usually means an advantage. We need to be able to search only to a predetermined depth (this should be easy already with the template given). I need to look at the Alpha-Beta Pruning algorithm to improve the performance. Having an object representing the state is potentially slow, but this can be looked at last. Summary A future part of this article is pretty much guaranteed as I still have quite a way to go on my quest for writing a semi-advanced game playing program.\nI hope this was interesting and made you want to try to play with Minimax yourself. Good luck!\n","permalink":"https://e4developer.com/posts/implementing-minimax-algorithm-in-java/","summary":"\u003cp\u003eIf you want to write a program that is able to play a strategy game, chances are good that you will be looking at the Minimax algorithm. This is especially true when it comes to games like chess, where variations of the Minimax algorithm are used to build the strongest chess-playing programs in existence. In this article, I will look at implementing the basic version of the Minimax algorithm with Java.\u003c/p\u003e","title":"Implementing Minimax Algorithm in Java"},{"content":"In this blog post, I want to take you back to basics and talk about Java for loops. 
To be honest, I am writing this blog post partially for myself, as this is something that I am myself too often guilty of. Since Java 8, we don’t have to write so many for loops in Java! I hope this blog post will make your code nicer to read and quicker to write.\nWhat do you need a for loop for? Broadly speaking, there are two categories of tasks performed by for loops:\nIterating over collections Running algorithms For algorithms, a for loop may be appropriate. Have a look at this algorithm checking if a number is a power of three:\ndouble number = 81; for(; number \u0026gt; 1; number /=3); return number == 1; A for loop is an appropriate construct here. This is a very simple example and, as you can imagine, things can get much trickier with more difficult algorithms.\nFor most developers, in their day-to-day work, this is a minority of cases. Most of the time, we use for loops to iterate over collections. Let’s look at some examples of that code.\nIterating over collections in Java Let’s take a List that contains some values.\nList\u0026lt;String\u0026gt; heroes = new ArrayList\u0026lt;\u0026gt;(); heroes.add(\u0026#34;SuperPerson\u0026#34;); heroes.add(\u0026#34;WonderGirl\u0026#34;); heroes.add(\u0026#34;LemurMan\u0026#34;); heroes.add(\u0026#34;TimesTenDeveloper\u0026#34;); heroes.add(\u0026#34;PandaFace\u0026#34;); heroes.add(\u0026#34;CobraKid\u0026#34;); heroes.add(\u0026#34;TShapedTeamMember\u0026#34;); There are many ways to iterate over it. Let’s start with the rather archaic Iterator approach:\nIterator\u0026lt;String\u0026gt; heroesIterator = heroes.iterator(); while (heroesIterator.hasNext()) { System.out.println(heroesIterator.next()); } That looks really heavyweight. 
This kind of code gives Java its somewhat deserved reputation for verbosity.\nAnother try, this time with the classic indexed for loop:\nfor(int i = 0; i \u0026lt; heroes.size(); i++){ System.out.println(heroes.get(i)); } Well, this is a bit simpler to follow, but since Java 5 we have had the for-each loop at our disposal:\nfor(String hero : heroes){ System.out.println(hero); } This is where most developers get stuck. This construct is so familiar and easy to follow that most of us don’t bother to think about anything better. Java 8 has been available for a while now though…\nWith Java 8 we can use the forEach function, making it very obvious what we are doing:\nheroes.forEach(hero -\u0026gt; System.out.println(hero)); We can simplify it even further:\nheroes.forEach(System.out::println); I really like this, as it is very obvious that we are not running an algorithm with a possibly dynamic number of steps- we are just iterating over the elements of a collection.\nTo be honest, I wish Java would allow us to also pass an index more easily with that style. Unfortunately, at the moment this is not possible:\n//not legal Java!!! heroes.forEach((hero , i) -\u0026gt; System.out.println(hero +\u0026#34; at \u0026#34;+i)); And if you want to keep using that style while accessing the index, you may need to resort to the less pretty:\nIntStream.range(0, heroes.size()) .forEach(i -\u0026gt; System.out.println(heroes.get(i) +\u0026#34; at \u0026#34;+i)); Where to go next? Use Java Streams Once you stop writing so many for loops in Java and forEach becomes second nature, you should look at Streams in Java.\nWith a similar syntax, you can, for example, easily choose all heroes beginning with the letter ‘T’:\nheroes.stream().filter(hero -\u0026gt; hero.startsWith(\u0026#34;T\u0026#34;)) .forEach(System.out::println); This gives you the famous “TimesTenDeveloper” and the “TShapedTeamMember”.\nSummary Stop writing so many for loops. 
Once you do, the Java 8 Streams will come as a natural step and your code will be easier to read and faster to write. What is not to like? Good luck!\n","permalink":"https://e4developer.com/posts/please-stop-writing-so-many-for-loops-in-java/","summary":"\u003cp\u003eIn this blog post, I want to take you back to basics and talk about Java \u003cem\u003efor loops\u003c/em\u003e. To be honest, I am writing this blog post partially for myself, as this is something that I am myself too often guilty of. Since Java 8, we don’t have to write so many \u003cem\u003efor loops\u003c/em\u003e in Java! I hope this blog post will make your code nicer to read and quicker to write.\u003c/p\u003e","title":"Please, stop writing so many “for loops” in Java!"},{"content":"I have blogged about my road to productivity recently. I received a comment there from one of my readers asking me if I have a secret weapon that helped me be so productive. While I don’t have a secret weapon strictly for productivity, I realised that I have a secret weapon (or two) for learning. That weapon is simply… Audiobooks!\nWhy Audiobooks? Why would you get an audiobook? What is so magical about them, when you can simply read a book (a paper version) or an eBook (on your Kindle or something similar)?\nIt is all about the freedom they give you! I listen to audiobooks:\nDuring my commute. Living in London, you often have to stand on the train, change platforms and deal with all sorts of situations that make reading more difficult. When I run or do exercise. It makes running less boring and the time goes by much quicker! (I also really recommend running- it is fun once you get into it!). When I walk somewhere. You can listen to an audiobook instead of music. This is basically time where I would previously be listening to music, or just enjoying the silence (or the noise of a commuter train!). That means that I get that time for free. I don’t have to sacrifice anything to listen to books! How great is that? 
You get to keep all your free time, but you somehow learn all the things you can from many amazing books?\nWhere to get Audiobooks? I am using Audible, as it works for me. It is a subscription service by Amazon that gives you one book a month for a specified subscription fee. You can also buy audiobooks on their own, but they are a bit pricey…\nThe good news is- there are many other places you can get audiobooks. Scribd is quite popular, although I haven’t used it much. I am sure that if you google around you will find even more.\nWhat Audiobooks are good for developers? Ok, so you like the idea of listening to audiobooks on the go, but where do you get started? What works well? I will share with you a short list of my favorite developer-oriented audiobooks so far:\n“Algorithms to Live By: The Computer Science of Human Decisions” (Brian Christian, Tom Griffiths) – Absolutely blew my mind! An amazing book about algorithms and everyday life. I can’t recommend it highly enough if you want to learn about algorithms that influence everyday life decisions. “Hello World: How to Be Human in the Age of the Machine” (Hannah Fry) – This one was released in September 2018, so pretty much fresh off the press! Once again, about algorithms, with a focus on the impact of algorithms on the modern world. Very interesting and highly recommended. “The 7 Habits of Highly Effective People: Powerful Lessons in Personal Change” (Stephen R. Covey) – It is a book about becoming more effective at life, so to speak. Hard to really do it justice in a paragraph. I highly recommend it, even though it does not have a developer focus. “The Lean Startup: How Today’s Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses” (Eric Ries) – This is an absolute classic. If you are thinking about starting a software-oriented venture, this should be your mandatory reading. Really eye-opening and full of practical advice. Easy to listen to! 
“Ready Player One” (Ernest Cline) – Ok, let’s be honest here- you don’t have to only listen to software-related material and books about working on yourself. Sometimes a fascinating novel makes for a good break from the more taxing reading. I could go on here, there are really plenty of books that are great to listen to on the go… Not everything makes for a good audiobook though!\nIf you want to read more code-oriented material such as “Clean Architecture” (my review), “Effective Java” (my review) or “Cracking the Coding Interview” (my review), an audiobook will not work well (or may not even be available).\nSo is there anything that you can do for learning programming on the go? Without a paper book or a laptop?\nBeyond Audiobooks – learning programming on the go Of course, there is! The answer is online courses. I use Pluralsight (free trial) on the go and I am currently using it to refresh my JavaScript knowledge.\nI have written previously about learning programming with Pluralsight and I really recommend it! This is my second secret productivity weapon.\nSummary Audiobooks are great for busy developers who have to spend many hours every week commuting, walking or even (more positively) exercising! If there are many books that you always wanted to read, but you never had time… now you have the solution! Enjoy your listening!\n","permalink":"https://e4developer.com/posts/audiobooks-a-secret-weapon-of-a-busy-software-developer/","summary":"\u003cp\u003eI have blogged about \u003ca href=\"https://e4developer.com/posts/my-road-to-productivity-start-finishing-and-producing/\"\u003emy road to productivity recently\u003c/a\u003e. I received a comment there from one of my readers asking me if I have a secret weapon that helped me be so productive. While I don’t have a secret weapon strictly for productivity, I realised that I have a secret weapon (or two) for learning. 
That weapon is simply… Audiobooks!\u003c/p\u003e\n\u003ch2 id=\"why-audiobooks\"\u003eWhy Audiobooks?\u003c/h2\u003e\n\u003cp\u003eWhy would you get an audiobook? What is so magical about them, when you can simply read a book (a paper version) or an eBook (on your Kindle or something similar)?\u003c/p\u003e","title":"Audiobooks - a secret weapon of a busy software developer"},{"content":"When thinking about microservices, we mostly imagine autonomous teams working on independent services. Despite all that independence, things such as log aggregation and security benefit from system-level thinking. In this article, I will discuss these concerns and give my advice on how to approach them.\nThere are different kinds of microservices systems. Some of them are truly independent, where every microservice is nearly indistinguishable from a third-party API; others- because of either necessity or practicality- rely on shared infrastructure or concepts.\nWhen looking at these cross-cutting concerns, you should always keep in mind what kind of system you are working on. One thing may be a good choice in organisation A and a terrible mistake in organisation B. Advice given here does not give you the mandate to stop being critical!\nWith this warning out of the way, let’s look at the common cross-cutting concerns in microservices architectures.\nAuthentication / Authorization The first thing worth thinking about is the approach to authorization and authentication within the system. Some of the questions worth asking are:\nDoes every service need to be secured? How to verify if a request is authentic- do we just limit the possible callers and assume that they have a right to execute anything? If we are going with a shared authorization scheme, what roles are we going to use in the system? 
Regardless of how you look at the problem, it makes little sense to make separate decisions- to answer these questions differently for each service.\nLog Aggregation and Distributed Tracing Another thing that benefits from system-level thinking and a consolidated strategy is dealing with logs. Having the ability to see a request’s journey through multiple different microservices is an extremely useful debugging tool.\nI wrote a whole article about Tracing messages in Choreography with Sleuth and Zipkin if you would like to know more about this topic.\nIn general, a tool like the ELK stack (Elasticsearch, Logstash, Kibana) is invaluable once your microservices move from development into production.\nConfiguration Management Getting your configuration management right is the difference between a pleasant, scalable microservice architecture and a configuration mess like you have never seen before.\nThere are multiple ways to fight this- Spring Cloud Config and storing configuration in the environment are among the most popular options.\nThis section, like others in this article, could easily be an article (or a series) in itself. I can’t possibly tell you enough about configuration management here, so I strongly advise you to research this area.\nService Discovery / Load Balancing One of the key benefits of a microservice architecture is its scalability. You may easily scale any part of the system to meet the demand. Regardless of whether this is done dynamically or not, you will need to know how to route traffic between all these different services.\nService Discovery and Load Balancing are the two very common practices that should be employed here. You may argue that this is not a truly cross-cutting concern and that individual microservices should not be aware of this setting. I agree with you, but someone needs to take care of it. 
Especially if it impacts service deployment configuration.\nUse of shared libraries Ok, you may be aware of that, but it still comes up surprisingly often when talking with developers new to microservices. The question is: “Can we have shared libraries between microservices?”.\n*“…Yes! Of course, you can!”* Just make sure that you are not introducing coupling and dependencies. Utility libraries that are versioned well (maybe with SemVer) and don’t introduce coupling are perfectly fine.\nUse of shared domain model This is the real cross-cutting concern… Can we have a shared domain model?\nMany developers will tell you – *absolutely no!* I don’t fully agree here. While you shouldn’t share the exact same code between a server and a client API, I think it is perfectly ok to share a client API domain model. Again, as long as you are not forcing others to use it.\nI have written a more nuanced answer to this question in my Code reuse in microservices architecture – with Spring Boot for Scott Logic.\nTo reiterate: You should not share the code between the Server API and a client. You can create a shared Client API library that is versioned and maintained well. This can be shared, but sharing should not be enforced.\nI found that this approach avoids most of the drawbacks of straight-up sharing and limits the problems with the never-ending copy-paste of often simple, yet verbose code.\nAutomated Testing If you are delivering a system, rather than a single microservice, you will be interested in the interactions between these services. I found that in a microservices architecture, these interactions can be very brittle if not tested properly.\nIf you can get your product/project/company to adopt consumer-driven contracts across the services, you will quickly see the benefits. The two technologies that I recommend looking up here are:\nPact – if you want to go the mostly technology agnostic route. Spring Cloud Contract – if you enjoy the Spring ecosystem. 
Spring Cloud Contract also works with non-Spring and non-JVM projects! Both are a great choice!\nSummary Have no illusions- these are not all the cross-cutting concerns that you may encounter in the wild when working with microservices. It turns out that when you deliver a system as a whole, even autonomous and independent teams should sometimes talk to each other!\nIn order to deal with concerns like that I recommend that you consider:\nHosting a regular technical forum where team representatives can meet and discuss problems of this nature. If you don’t need this formalised, just talk between teams about these issues. Don’t pretend that cross-cutting concerns don’t exist in microservices systems. I hope this was an interesting read. If you can think of other cross-cutting concerns that come up in microservices architectures a lot, let me know on Twitter or in the comments!\n","permalink":"https://e4developer.com/posts/microservices-and-cross-cutting-concerns/","summary":"\u003cp\u003eWhen thinking about microservices, we mostly imagine autonomous teams working on independent services. Despite all that independence, things such as log aggregation and security benefit from system-level thinking. In this article, I will discuss these concerns and give my advice on how to approach them.\u003c/p\u003e\n\u003cp\u003eThere are different kinds of microservices systems. Some of them are truly independent, where every microservice is nearly indistinguishable from a third-party API; others- because of either necessity or practicality- rely on shared infrastructure or concepts.\u003c/p\u003e","title":"Microservices and cross cutting concerns"},{"content":"You might have noticed that I like reading books. I have recently read *“Algorithms to Live By: The Computer Science of Human Decisions”*, which absolutely fascinated me! The book mentions a famous optimal stopping (Wikipedia) problem called the Secretary Problem. 
In this blog post, I will explain it and then we will have some fun simulating it with Java. Let’s see if we can find a solution by brute force!\nSecretary Problem Defined Imagine that you need to hire a secretary. Imagine now that you have 100 candidates that you are going to interview. Because you are a perfect interviewer, you can compare every single person against everyone else that you have seen so far. After the interview, you have to either hire the person or reject them. If you reject them, you can’t change your mind. You win if you have managed to hire the best candidate out of the whole lot.\nWikipedia defines the problem more formally:\nThere is a single position to fill. There are n applicants for the position, and the value of n is known. The applicants, if seen altogether, can be ranked from best to worst unambiguously. The applicants are interviewed sequentially in random order, with each order being equally likely. Immediately after an interview, the interviewed applicant is either accepted or rejected, and the decision is irrevocable. The decision to accept or reject an applicant can be based only on the relative ranks of the applicants interviewed so far. The objective of the general solution is to have the highest probability of selecting the best applicant of the whole group. This is the same as maximizing the expected payoff, with payoff defined to be one for the best applicant and zero otherwise. Why is this fascinating? Because it is not trivial and not too dissimilar from other decisions we make in life! Buying a house, finding a life partner, staying in your career… All of these can be viewed as optimal stopping problems, or even as variations of the Secretary Problem.\nSimulating the Secretary Problem with Java I decided to have some fun and model the situation in Java. 
I will start by creating a SecretaryProblem class:\npackage com.e4developer.secretary; import java.util.Random; public class SecretaryProblem { private double[] candidates; private double best = Double.MIN_VALUE; private final Random random; private SecretaryProblem(Random random){ this.random = random; } /** * Generating random secretary problem based on a given size * @param size * @return */ static SecretaryProblem generate(int size, Random random){ SecretaryProblem sp = new SecretaryProblem(random); sp.candidates = new double[size]; for(int i = 0; i \u0026lt; size; i++){ double v = sp.random.nextDouble(); if(sp.best \u0026lt; v) sp.best = v; sp.candidates[i] = v; } return sp; } public double[] getCandidates() { return candidates; } public double getBest() { return best; } } Now, I know that the optimal solution to this is to keep interviewing candidates, skipping everyone and at some point make a decision that we are ready to commit. Once we are ready, we will hire the next person that is better than everyone seen so far. 
This is implemented in my SecretaryProblemSolver class:\npackage com.e4developer.secretary; public final class SecretaryProblemSolver { private SecretaryProblemSolver(){} /** * Solving the secretary problem by committing to the best so far * after a specified hire point * @param sp * @param hirePoint * @return */ public static boolean simpleSolve(SecretaryProblem sp, int hirePoint){ double bestSoFar = Double.MIN_VALUE; for(int i = 0; i \u0026lt; sp.getCandidates().length; i++){ if(sp.getCandidates()[i] \u0026gt; bestSoFar) { bestSoFar = sp.getCandidates()[i]; if(i \u0026gt; hirePoint) return bestSoFar == sp.getBest(); } } return sp.getCandidates()[sp.getCandidates().length-1] == sp.getBest(); } } To finish the attempt at finding the solution, I wire up the code in the Main class:\npackage com.e4developer.secretary; import java.util.Random; public class Main { public static void main(String[] arg){ System.out.println(\u0026#34;Welcome to the secretary problem\u0026#34;); final int problemSize = 100; final double sampleSize = 100000; final Random random = new Random(7); for(int i = 1; i \u0026lt; problemSize; i++){ double success = 0; for(int j = 1; j \u0026lt; sampleSize; j++){ SecretaryProblem sp = SecretaryProblem.generate(problemSize, random); boolean solved = SecretaryProblemSolver.simpleSolve(sp, i); if(solved) success++; } System.out.println(i+\u0026#34;, \u0026#34;+success/sampleSize); } } } Secretary Problem with Java – the results After running the code for 100 different commit points, I came up with the following success rates:\nI have found that committing after the 37th candidate works best and gives us about a 37% chance of finding the best candidate!\nWhat does the math say? The secretary problem is solved and well understood. We know that we should always commit after seeing about 37% of candidates and that this would give us about a 37% chance of success! Great to see that our little Java brute force confirms that (it means we did it right).\nWhy 37%? 
Well… 1/e equals about 0.367879. Why is this the magical stopping point? If you have the stomach for some hardcore mathematics, I refer you to Wikipedia here!\nIt is worth noting that this ~37% rule for the stopping point and the rate of success holds for any size of candidate pool- be it 100 or 1000000!\nThe really fascinating thing is that our adventure with the secretary problem does not have to end here. We can easily modify the problem and look at how this impacts the solution and the resulting curve. Modelling can often reveal insights that can take longer to discover with a strictly mathematical approach.\nSummary I hope you enjoyed this little write-up on the Secretary Problem. I am planning to return to it in the future and see how we can model modified versions of the problem.\nIn the meantime, if you like problems like that, I recommend checking out “Algorithms to Live By: The Computer Science of Human Decisions”, which is my favourite book I have read this year so far! I will definitely be borrowing more from it and this is not the last attempt at modelling that you will see on this blog.\nTill the next time!\n","permalink":"https://e4developer.com/posts/simulating-the-secretary-problem-with-java/","summary":"\u003cp\u003eYou might have noticed that I like reading books. I have recently read *“Algorithms to Live By: The Computer Science of Human Decisions”*, which absolutely fascinated me! The book mentions a famous \u003ca href=\"https://en.wikipedia.org/wiki/Optimal_stopping\"\u003eoptimal stopping (Wikipedia)\u003c/a\u003e problem called the Secretary Problem. In this blog post, I will explain it and then we will have some fun simulating it with Java. Let’s see if we can find \u003cem\u003ea solution\u003c/em\u003e by brute force!\u003c/p\u003e\n\u003ch2 id=\"secretary-problem-defined\"\u003eSecretary Problem Defined\u003c/h2\u003e\n\u003cp\u003eImagine that you need to hire a secretary. Imagine now that you have 100 candidates that you are going to interview. 
Because you are a perfect interviewer, you can compare every single person against everyone else you have seen so far. After the interview, you have to either hire the person or reject them. If you reject, you can’t change your mind. You win if you manage to hire the best candidate out of the whole lot.\u003c/p\u003e","title":"Simulating the Secretary Problem with Java"},{"content":"Being productive and motivated- who wouldn’t want that? I used to think that some people are just born more motivated, that some people simply are this way. “The War of Art” is an eye-opening book. I think that had I never read it, it is unlikely I would have managed to write two articles a week for this blog ever since starting in January 2018. Here is a short story on my road to productivity.\nThe old status quo – what was wrong? I don’t consider myself particularly lazy. I have managed to graduate from university and I never had problems working. I am probably rather average here as far as the software profession goes.\nThe difficulties were not with day-to-day work or smaller tasks. What I have problems with is completing bigger, personal projects. Writing a successful Xbox game, an iPhone app, numerous web applications, starting a blog and keeping it updated… I have tried it all before, but somehow never completely finished.\nI am not saying here that there is no value in trying many things. I did not feel too bad about it, because each of these things taught me a lot… However, there was always this question- what if I had managed to finish what I started? Could I have written an iPhone app that would make good money? Could this blog become successful? I will never know with my previous attempts…\nWhat changed? I stumbled upon “Soft Skills: The Software Developer’s Life Manual” by John Z.
Sonmez, a fascinating book by a very interesting person.\nIt is quite funny how I found this book, as I was basically checking whether someone had written an article too similar to what I was planning to write (Soft Skills for Software Developer, for the Scott Logic blog) and I saw the Amazon listing for the book…\n“Soft Skills for Software Developer” If you work in software and you care about your career- I can’t recommend this book highly enough. It deserves its own review here (which I will get to at some point), but in short- this is a book for you if you want to know about:\nInterviews Career advancement Freelancing Startups Remote working Marketing Blogging Speaking Learning Mentoring This is me listing interesting topics before even getting halfway through this book! What did I get out of the book? I got motivated to finally start something, to push myself. I was fascinated by the book and I decided that I would try to improve myself and do something that I had wanted to do for a long time…\nStarting this blog I decided to start this blog. I decided that this time would be different- I would write regularly and I would make it something valuable to others and myself.\nWhen you start something like this blog (or a software project), you start with lots of motivation… This motivation often dries up in a few weeks or months.\nThis time, it would be different- I promised myself to stick to a schedule of a minimum of 2 articles per week and I would stick to it.
This time I had a secret weapon…\nJohn Sonmez I don’t know if John will ever read this, but if he does- thank you, John, I am not sure I would have managed to stay focused without reading your book and watching your videos.\nIf you don’t know who John Sonmez is, you can find him on Twitter, on YouTube, or read one of his two popular books – “Soft Skills for Software Developer” and “The Complete Software Developer’s Career Guide”.\nFor me, reading his articles and listening to his videos showed me that, really, I don’t have any excuse for being lazy or not meeting my targets. I won’t be able to convey this message as well as John does, so I am referring you to the original material. Check out his YouTube channel.\nIn a few of his videos and in “Soft Skills for Software Developer”, John refers to “The War of Art” by Steven Pressfield as one of his all-time favourite books… This was the next stop on my road to productivity.\n“The War of Art” If you want an immediate “kick in the ass” to start working, pick up “The War of Art” by Steven Pressfield and read it. The book is very short, around 160 pages with a lot of white space. You can read it in an evening and your life may change.\nSeriously, if you want to start finishing what you started- buy, borrow and read this book.\nThe book is about overcoming resistance. Resistance is what keeps us from writing. Resistance is what stops us from doing our art– be it writing a software blog, completing a Renaissance painting or working on our iPhone application.\nThere are three parts to this book:\nBook One – Resistance – Defining the Enemy: This is where we learn what is stopping us from creating. Book Two – Combating Resistance – Turning Pro: This is the essence of what is required to prevail. Book Three – Beyond Resistance – The Higher Realm: A bit of an unexpected ending, a sort of “invocation of the muse”.
You may have mixed feelings about Book Three (this is even acknowledged in the foreword), but it is worth reading the whole book regardless.\n“The War of Art” changed my approach to work, for good.\n“Amateurs sit and wait for inspiration, the rest of us just get up and go to work.”\nStephen King\nSummary I wrote this article to share with you the things that changed my life for the better. I hope they will have a similar impact on you if you choose to follow that path.\nWriting this blog is my main project for 2018. Next year I am planning to continue that, but with one article per week (rather than two) and a focus on a different large project. With this experience, I feel ready.\n","permalink":"https://e4developer.com/posts/my-road-to-productivity-start-finishing-and-producing/","summary":"\u003cp\u003eBeing productive and motivated- who wouldn’t want that? I used to think that some people are just \u003cem\u003eborn\u003c/em\u003e more motivated, that some people simply are this way. \u003cem\u003e“The War of Art”\u003c/em\u003e is an eye-opening book. I think that had I never read it, it is unlikely I would have managed to write two articles a week for this blog ever since starting in January 2018. Here is a short story on my \u003cem\u003eroad to productivity\u003c/em\u003e.\u003c/p\u003e","title":"My road to productivity - start finishing and producing"},{"content":"In my career as a software developer, I have interviewed over 100 people. Most of these interviews were face-to-face interviews involving coding on paper or a whiteboard. In this article, I want to give you practical advice on how to approach these interviews.\n“Whiteboard”, or “paper” based coding technical interviews, are the interviews in which you are asked to write code, either on a whiteboard or a piece of paper in front of you.\nThere is an idea that these interviews are difficult, or that they are very different from coding with an IDE.
In reality, they only seem that way because most people, strangely enough, never practice for them!\nThe good news is- you can practice and prepare for them, which will make these kinds of interviews your strength. If you do, you will stand out from other candidates and will be more likely to get the job that you want.\nPractice coding interview questions on a computer First of all- don’t assume that you can cheat these interviews with some tricks. The best thing you can do is spend some hours actually practising interview-style questions on a computer.\nI have written an article about keeping your coding skills sharp with HackerRank. I consider it a great approach and something that you should definitely do before attending a whiteboard or paper-based interview.\nIf you feel that you may need to prepare even more, or you are interviewing for a company famous for very difficult whiteboard interviews, I strongly recommend having a look at “Cracking the Coding Interview”, which I reviewed some time ago.\nPractice coding interview questions on paper or a whiteboard Assuming that you can program and know enough theory to pass the interview if it were done with an IDE, why is whiteboard coding still a problem for people? It is because most people only do it during the interview!\nThe solution to this problem is really simple… Try solving a few problems on a whiteboard if you have access to one. If you don’t, just try solving them on paper, without an IDE. Really, it is that simple.\nI guarantee you that once you do it a few times, all these interviews will become much less intimidating.\nHow to effectively code on paper or whiteboard Even if you practice a bit, it is useful to know some practical tips related to coding on paper:\nChoose short variable names – seriously, it will save you a lot of time and effort. Don’t immediately close that { bracket – many candidates write the } bracket somewhere halfway down the page, then stress about the space left.
Have the basic algorithm worked out before you start coding – when working with an IDE, people often use the code to come up with a solution. That does not work so well on paper. Know what you want to write before you start writing. Be smart about the available space – make sure to start from the top of the page/board and choose the size of your writing wisely. It is best if everything fits on the page, and this is easier when you maximise the available space. Talk through your thinking as you write – this way it will be easier for the interviewers to follow your progress. As a bonus, you may get some direction and a collaborative feel to the whole exercise. Don’t get hung up on details – if you can’t remember a method name in a standard library, just choose something similar. If you get called on that, be honest- most interviewers won’t mind you forgetting a method name. “Debug” your code with some examples – it is easy to make an obvious mistake on paper – forgetting to return a variable or something like that. If you run through your code with some example inputs (out loud), you can catch these mistakes. Leave some space between the lines – it will make it much simpler to add code there if you need to. How to solve a paper or whiteboard question The tips above are very much about the act of writing code on paper or a whiteboard. I also want to repeat some popular advice about solving these questions that is all too easy to forget.\nTriple-check that you understand the question – one of the worst scenarios is to misunderstand the question and attempt to solve something else as the puzzled interviewers watch you. Know the primitive types and basic data structures – you should know your int, String, HashMap etc. pretty well before the interview. Make sure that you know their APIs. Understand big O notation – it is really not that difficult. Here is a simple article by Rob Bell that explains the basics. The fastest general-purpose sort is O(n log(n)).
Don’t panic – believe it or not, most of the time, your interviewers want you to succeed. Ask them questions, try not to panic. Practice before – as I mentioned already, practice these kinds of questions beforehand. The more you know, the easier it is. Summary Paper-based or whiteboard coding does not have to be stressful. If you come prepared, you may even find it entertaining! There are no shortcuts to success, but without the right preparation, you may still try hard and fail. Don’t let that happen to you.\nTo finish on a light note – “The Fizz Buzz from Outer Space” by DailyWTF is still the funniest story about a paper-based interview that I have ever heard!\nGood luck in your next interview!\n","permalink":"https://e4developer.com/posts/tips-for-whiteboard-and-paper-coding-interviews/","summary":"\u003cp\u003eIn my career as a software developer, I have interviewed over 100 people. Most of these interviews were face-to-face interviews involving coding on paper or a whiteboard. In this article, I want to give you practical advice on how to approach these interviews.\u003c/p\u003e\n\u003cp\u003e“Whiteboard”, or “paper” based coding technical interviews, are the interviews in which you are asked to write code, either on a whiteboard or a piece of paper in front of you.\u003c/p\u003e","title":"Tips for “whiteboard” and “paper” coding interviews"},{"content":"In the past three years, I have been involved in developing microservices architectures: in Java with Spring Boot, and in Groovy with Grails. Perhaps risking some outrage, I will compare my experience with Java and Groovy, Spring Boot and Grails, and give my opinion on what works best.\nI will start by looking purely at the languages, as Spring Boot works happily with Groovy and Java alike.\nWhat I like in Java-based microservices Java is extremely popular Java is a commonly used language.
If you trust the TIOBE Index, it is as common as it gets:\nWith such popularity, it is (relatively) easy to find Java experts and expert advice on the Internet. This really matters when trying to assemble a team that will succeed. It makes it easier.\nJava and its annotation syntax are easy to read I am a huge fan of Java annotations. They make writing microservices much easier, especially when powered by a framework like Spring Boot.\nI find annotation syntax a great way of making a convention explicit. When looking at Spring code, even if you don’t know Spring well, the presence of an annotation alerts you to something special going on.\nThere is a lot of value in readability, especially when working on more complicated systems.\nJVM is a great platform This sounds a bit paradoxical, but another great advantage of Java is that… you are not stuck with only Java!\nUsing Java for your main application does not mean that you can’t use a different language elsewhere. One interesting application is to use Groovy for your build- with Gradle, or testing- with Spock.\nOne of my colleagues was even on a project where the team used Scala for writing test code. With the JVM- you have options.\nWhat I don’t like in Java-based microservices Semicolons It does not sound like a big deal, but after writing software with JavaScript, Swift, Groovy, Kotlin… you start to forget those. Of course, there is the IDE, but it gets annoying.\nJava is on the verbose side This is improving with the addition of var, but still- Java feels like a rather verbose language for microservices development. This is, again, rather minor.\nAfter testing with Groovy and Spock, Java feels lacking I have already mentioned using a different language for testing than for the application itself. After writing tests with Spock, JUnit feels clunky.
I really recommend you check out the project if you find parametrized testing in JUnit not as good as it could be!\nWhat I like in Groovy-based microservices Groovy lets you write less code It’s great that something that takes many lines in Java can often be expressed in about half the amount in Groovy. It makes development feel quick and pleasant.\nGroovy improves on the verbosity of Java and after using it for a while, you miss these features in Java.\nTesting and build tools in Groovy are great Spock made such an impression on me that it bears repeating for the third time- great framework, check it out.\nI have used Maven for most of my career and Gradle came as a pleasant surprise. While I still use Maven by default, if you need more complicated builds- Gradle feels cleaner.\nWhat I don’t like in Groovy-based microservices Groovy is not a popular language I know this is not a popularity contest, but it matters. It is quite difficult to find experienced Groovy developers. The mitigating factor is the similarity to Java- most Java developers learn Groovy very quickly.\nGroovy lets you write less code With Groovy it is possible to start writing too little code. You can omit the return keyword in many cases. Things like that, while sometimes useful, can lead to some very unreadable code.\nIf you take the magic that Groovy lets you perform and mix it with developers who are using the language for the first time… you can quickly find yourself in trouble!\nWeak typing is a trap Weak typing can be useful, but I find it brings mostly problems when working with REST-based microservices.\nFor a more elaborate critique of weak typing, I invite you to read the excellent answer by Bartosz Milewski to the question “Why were weakly-typed programming languages created?” that was posted on Quora.\nTo avoid misuse, I would prefer Groovy not to allow weak typing.
The mitigating factor- in a well-disciplined team, you can work around this.\nIs Spring Boot a good choice for microservices architecture? Half of this blog is around that topic, so I will just say yes- Spring Boot is great. If you want to see multiple more nuanced opinions, check the Spring Boot section of this blog.\nIs Grails a good choice for microservices architecture? I don’t like to criticize frameworks, especially ones that have been around as long as Grails has. It has its place, and numerous world-class systems were delivered using it.\nHowever, I don’t consider Grails a good choice for microservices architecture. Why? Because it is too complex. Spring Boot is an already complex framework and Grails builds on top of it.\nI wrote about “The Quest for Simplicity in Java Microservices” and I don’t think Grails really makes the cut here. If you want something simple, well suited to microservices, I would either go with a “naked” Spring Boot (possibly even using Groovy) or go with Micronaut if you like the Grails style of development.\nI am not saying that it is impossible to develop microservices with Grails- I have done it myself! I am merely suggesting that there are frameworks that provide a better experience. Micronaut is being developed by people involved in Grails, so to some extent- I am not alone in this opinion.\nSummary Both Java and Groovy are good choices for developing microservices. If you decide to give Groovy a go- make sure that you know what you are signing up for. Going with Groovy you can still choose Spring Boot as your framework. I would recommend doing that rather than going with Grails.\nAs with any such discussion- your circumstances may differ, so make sure you choose what works in your case.\n","permalink":"https://e4developer.com/posts/java-vs-groovy-for-microservices/","summary":"\u003cp\u003eIn the past three years, I have been involved in developing microservices architectures: in Java with Spring Boot, and in Groovy with Grails.
Perhaps risking some outrage, I will compare my experience with Java and Groovy, Spring Boot and Grails, and give my opinion on what works best.\u003c/p\u003e\n\u003cp\u003eI will start by looking purely at the languages, as Spring Boot works happily with Groovy and Java alike.\u003c/p\u003e\n\u003ch2 id=\"what-i-like-in-java-based-microservices\"\u003eWhat I like in Java-based microservices\u003c/h2\u003e\n\u003cp\u003e\u003cimg loading=\"lazy\" src=\"/posts/java-vs-groovy-for-microservices/images/java-new-logo.png\"\u003e\u003c/p\u003e","title":"Java vs Groovy for Microservices"},{"content":"“Scrum Mastery: From Good To Great Servant-Leadership” – a book that I picked up recently based on it being the number-one selling book about Agile methodologies (on Amazon). I also wanted a fresh view on that role, given that I work in a Scrum team myself. Was it worth my time? Definitely!\nI did not really know what to expect from this book. There are plenty of “Scrum manuals” out there, which, more often than not, describe some idealised realities.\nThis book is real. Reading it, I could relate my own experience to examples from the book. Especially when you start questioning yourself- can Scrum really work for us? The book shows how a Scrum Master can make it work in these less-than-ideal circumstances.\nEasy to read, full of practical advice- is that enough? In case you still can’t decide if it is for you (or you are just interested in the key message), I will explain the major themes of the book.\nIt is not all about You! To be blunt, being a Scrum Master is not about you! It is about the team that you are going to work for.\nAs a Scrum Master, you work FOR the team.\nMe\nThe key metric of your success is not how by-the-book you run your ceremonies, or how nice your JIRA dashboard is. The key metric of success is how well the team is delivering value.\nThis is quite a humbling, but also very rewarding, perspective. Your main role is to make others shine.
Don’t worry about yourself. If you do this job right, the team will notice.\nThis is not to say that you need to please the team all the time, or simply do exactly what they ask you every time. You want to help your team, but that may sometimes involve asking questions or challenging their opinions.\nThe role of the Scrum Master is also about leadership. It is, however, the servant-leadership kind, rather than the more direct, managing type.\nDiplomacy at work The second theme that runs through the book is the importance of diplomacy (although it is rarely called that).\nAs a Scrum Master, you are likely changing the status quo. You could be challenging the way things work in your organisation, or challenging some internal team processes. To do that effectively, you need to be respected and trusted.\nThe book gives numerous techniques by which you can tactfully challenge the team and inspire them to improve. It is important not to forget that this is about the team’s success, not implementing your pet ideas, so a healthy balance and staying open-minded can help a lot.\nOn the topic of challenging the company culture and policies- this may be especially difficult, but also especially useful. How do you change processes while, in theory, not having any authority to do that? The role of a Scrum Master rarely grants such authority. Well, the book gives some advice on how to approach that without getting fired.\nPower to the team All that diplomacy and servant leadership have a common goal – empowering the team! This is really what this book is all about: how to manoeuvre through a challenging environment to end up with a well-functioning, empowered team.\nAsking questions, listening, inspiring, being creative- you are the team’s full-time secret weapon. You want to be the team’s Mjölnir (Thor’s mythical hammer, the source of his power).\nAnother measure of your long-term success is how well the team does without their secret weapon.
Are they going to fall like Gollum after losing the One Ring, or, like Thor (the Marvel one), realise that they can prevail without their precious hammer? Knowledge of The Lord of the Rings or the Marvel Cinematic Universe is not necessary to enjoy the book.\nSummary If you work as a Scrum Master, or even if you only work in a Scrum team, “Scrum Mastery: From Good To Great Servant-Leadership” makes for a very interesting read. It is the book that I wish every Scrum Master was familiar with!\n","permalink":"https://e4developer.com/posts/how-to-be-a-good-scrum-master-start-with-this-book/","summary":"\u003cp\u003e“Scrum Mastery: From Good To Great Servant-Leadership” – a book that I picked up recently based on it being the number-one selling book about Agile methodologies (on Amazon). I also wanted a fresh view on that role, given that I work in a Scrum team myself. Was it worth my time? Definitely!\u003c/p\u003e\n\u003cp\u003eI did not really know what to expect from this book. There are plenty of \u003cem\u003e“Scrum manuals”\u003c/em\u003e out there, which, more often than not, describe some idealised realities.\u003c/p\u003e","title":"How to be a good Scrum Master? Start with this book!"},{"content":"I love playing chess. I also love programming. How to marry the two? How about playing chess against a chess engine (a chess-playing program) that I have designed and written? If you are intrigued- read on! This is the first blog post in the series about building my chess engine in Java.\nThe rules of chess I assume that you are more or less familiar with the rules of chess. You have two “armies” facing each other, making move after move, until one side is guaranteed to capture the enemy king (checkmate), or neither side can win- a draw.\nIf you need to refresh your memory on the exact rules, you can find the Rules of chess on Wikipedia.
If you are a less serious player, I would point out a few overlooked and misunderstood rules:\nCastling – different in both directions Promoting a pawn to any piece once it crosses the board En passant – a special capture with a pawn There are also a few rules regarding checks – you have to defend. Assuming you have the basics covered, let’s look at designing a clean, object-oriented Java chess engine.\nDesigning the Java chess engine What are the key elements of building a chess engine? In fact, it is not that complicated. You need the following parts:\nA way of storing the chess position A way of generating all legal positions that could follow from given positions (all legal moves) A way of evaluating every position (how good the move is) An algorithm to effectively select the move, searching the tree of possibilities Ideally, you would want all these steps to be done with reasonable efficiency- looking at millions of possible resulting positions before selecting a move.\nIn the design phase, I will focus on the approach to the first two parts of the problem.\nFirst, it would be good to have a class for storing the overall state of the game. This will be the main class that we are interacting with. Let’s call it ChessGame.\nLet’s have positions describe the flow of the game (we could have moves, but I like the immutability for now). The currentPosition will be… well, the current position on the board. We want to be able to makeMove – to apply a new position and to find the bestMove.\nI hope that all makes sense so far. At this point, let’s think about what we are trying to achieve with this design:\nSimple to understand and read Efficient to work with Focused on immutability for easy multithreading Following these guidelines, this is the design for the Position that I came up with:\nWe will have the 64 squares represented as an array of bytes, each byte corresponding to a piece (white or black) or an empty square.
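To make the byte-array idea concrete, here is a minimal sketch of how such a Position could look. All the names here (the piece constants, pieceAt, withMove) are my own illustration of the design, not the engine's final API, and withMove ignores the special rules for now:

```java
final class Position {

    static final byte EMPTY = 0;
    static final byte WHITE_PAWN = 1;  // illustrative piece codes:
    static final byte BLACK_PAWN = -1; // positive = white, negative = black

    private final byte[] squares;    // 64 entries, index 0 = a1 ... index 63 = h8
    private final boolean whiteMayCastle;
    private final boolean blackMayCastle;
    private final int enPassantFile; // -1 when no en passant capture is possible
    private final int moveNumber;    // parity tells us whose move it is

    Position(byte[] squares, boolean whiteMayCastle, boolean blackMayCastle,
             int enPassantFile, int moveNumber) {
        this.squares = squares.clone(); // defensive copy keeps the position immutable
        this.whiteMayCastle = whiteMayCastle;
        this.blackMayCastle = blackMayCastle;
        this.enPassantFile = enPassantFile;
        this.moveNumber = moveNumber;
    }

    byte pieceAt(int square) {
        return squares[square];
    }

    boolean isWhiteToMove() {
        return moveNumber % 2 == 0;
    }

    // "Making a move" never mutates this position - it builds a brand-new one,
    // which is what makes the immutable design friendly to multithreaded search.
    Position withMove(int from, int to) {
        byte[] next = squares.clone();
        next[to] = next[from];
        next[from] = EMPTY;
        return new Position(next, whiteMayCastle, blackMayCastle, -1, moveNumber + 1);
    }
}
```

Because every move yields a fresh Position, a search can hand the same position to many threads without any locking; the price is an extra 64-byte copy per move.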
This is done for efficiency, although it is not the most efficient possible implementation. We will look at that later.\nThere are also a number of flags attached to a Position. We need to know whether castling has happened, whether an en passant capture is possible, and which move we are on (this determines whether it is white’s or black’s move). We are not dealing with the 50-move draw rule here.\nHere I would ask for your ideas. Can we do much better than an array of bytes while maintaining readability and performance? If you have good ideas, let me know on Twitter (@e4developer) or in the comments.\nThe last part is looking at the pieces. All we need to know are the possible moves, leaving us with a rather simple design:\nAll we need to get from each Piece is the set of legal transitions possible from a given position. These are going to be simple static methods, as pieces are actually recorded as bytes (for now).\nThis leaves me with the following initial design to implement:\nJava Chess Engine – next steps So far I have talked about the basic design. This leaves me with the following next steps to carry out:\nImplement the design and share the implementation with you on my blog Start working on the evaluatePosition() function – this will be a lot of fun! Implement a minimax algorithm with alpha-beta pruning. This is mandatory for a decent chess engine. If you can think of improvements over this plan, I will be keen to listen!\nSummary The goal of this article was to present the steps required to implement an object-oriented Java chess engine. I also want to take you on the journey towards the implementation. I hope you will enjoy the ride!\n","permalink":"https://e4developer.com/posts/designing-an-object-oriented-chess-engine-in-java/","summary":"\u003cp\u003eI love playing chess. I also love programming. How to marry the two? How about playing chess against a chess engine (a chess-playing program) that I have designed and written? If you are intrigued- read on!
This is the first blog post in the series about building my chess engine in Java.\u003c/p\u003e\n\u003ch2 id=\"the-rules-of-chess\"\u003eThe rules of chess\u003c/h2\u003e\n\u003cp\u003eI assume that you are more or less familiar with the rules of chess. You have two “armies” facing each other, making move after move, until one side is guaranteed to capture the enemy king (checkmate), or neither side can win- a draw.\u003c/p\u003e","title":"Designing an Object Oriented Chess Engine in Java"},{"content":"You might have recently seen some Pluralsight promotion on my page. There are two reasons for this. Reason number one- I became a Pluralsight Affiliate and I earn by promoting their website. Reason number two- I use Pluralsight myself and I think it is a great place to level up your skills. In this article, I will explain why you should consider it too!\nWhat is Pluralsight? Pluralsight is an online training platform… Ok, come on, I am not going to bore you to death here!\nIf you want to learn something new like Kotlin, Spring Cloud, Docker etc., Pluralsight is one of the best places to do that! They offer great online videos that you can watch either on their website or in a pretty good app.\nAnother great thing that they offer is the “Skill IQ”. These are online tests that you can take to get a reality check of how good you are with different tech. You can see my Java level here.\nWell, my Java is pretty solid, but C# is a different story (as expected):\nI really like these “Skill IQ” tests, as they give you a bit of a reality check of where you stand and what you have to work on. There are probably around 100 of them.\nHow is it different from other online courses? You don’t buy individual courses on Pluralsight. Once you are enrolled, you can watch any video you please, making it especially useful during periods of intense learning.
I recommend trying it out for free, possibly signing up for a month and seeing how it goes!\nTo get the most value out of it, make sure to use it while you pay for it. I find it unlikely that I would be watching videos for 12 months in a row, but you don’t have to! You can cancel your subscription and come back a few months later when you are ready to learn again.\nWhen you go beyond the free trial, it costs around $35 per month. This sounds like a lot, but I regularly spend more on books. The idea is to make the most of that month. I feel that if I watch 2 or 3 courses, then these $35 are well worth it!\nWhat are my favourite courses and content on Pluralsight? As they say, the proof is in the pudding, so let me give you some of my favourite courses from Pluralsight:\nJava Microservices with Spring Cloud: Developing Services – an outstanding course that will teach you everything you need to know to make use of Spring Cloud for developing services. It is the first part of a two-course series on Spring Cloud. I really enjoyed it.\nJava Microservices with Spring Cloud: Coordinating Services – the second part of the two-part series on Spring Cloud microservices. Especially useful, as it is with microservices coordination that the problems usually start.\nSpring Boot: Efficient Development, Configuration, and Deployment – if you are using Spring Boot and would like to get to know it better- this is the course for you. It is not an introduction to Spring Boot, but rather a review of its slightly more advanced features. Short enough to watch within the free trial.\nDocker Deep Dive – do you really want to understand Docker? Nigel Poulton is the man! He takes you on a deep and fascinating dive into the Docker world. Containers and DevOps are the future (and the present for many of us), so this one comes highly recommended.\nKotlin Fundamentals – Kotlin is all the rage on the JVM these days.
If you don’t know the language already, this is one of the ways to get you started.\nApplying Concurrency and Multi-threading to Common Java Patterns – Pluralsight has a staggering amount of content related to core Java. Concurrency has always been one of the more difficult topics in Java. This course explains it well in a really short amount of time.\nSummary There are many ways to keep up to date with tech. I have written about Twitter and Reddit before. If you think you may enjoy high-quality video content with the ability to test your progress, give Pluralsight a (free) try!\nI am confident sharing this with you as I am a Pluralsight user myself and can vouch for the quality of the courses out there!\n","permalink":"https://e4developer.com/posts/learning-java-spring-microservices-with-pluralsight/","summary":"\u003cp\u003eYou might have recently seen some Pluralsight promotion on my page. There are two reasons for this. Reason number one- I became a Pluralsight Affiliate and I earn by promoting \u003ca href=\"https://www.e4developer.com/ps-a-free-trial\"\u003etheir website\u003c/a\u003e. Reason number two- I use Pluralsight myself and I think it is a great place to level up your skills. In this article, I will explain why you should consider it too!\u003c/p\u003e\n\u003ch2 id=\"what-is-pluralsight\"\u003eWhat is Pluralsight?\u003c/h2\u003e\n\u003cp\u003ePluralsight is an online training platform… Ok, come on, I am not going to bore you to death here!\u003c/p\u003e","title":"Learning Java / Spring / Microservices with Pluralsight"},{"content":"“All problems in computer science can be solved by another level of indirection” – David Wheeler.
In this article, I look at my favourite quotes relating to software development and what we can learn from them.\n“All problems in computer science can be solved by another level of indirection”\nDavid Wheeler\nI opened the article with this one, as it is one of the truest and most insightful quotes that I have ever read about computer science. You can say that this is another way of expressing the Dependency Inversion Principle, which states:\nHigh-level modules should not depend on low-level modules. Both should depend on abstractions. Abstractions should not depend on details. Details should depend on abstractions. “Any fool can write code that a computer can understand. Good programmers write code that humans can understand.”\nMartin Fowler\nNew programmers often chase the wrong thing. They try to make their code as short as possible or sacrifice time and legibility with premature optimisations. Martin Fowler is here to remind us what is really important- making your code easy to understand! I would extend that to API, library and framework design.\n“Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live.”\nJohn F. Woods\nIf the gentle words of Martin Fowler were not convincing enough, here is a quote from John F. Woods! Of course, this is an exaggeration, but when your code is exposed to the scrutiny of code review- you don’t want to be THAT guy!\n“organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.”\nM. Conway\nThis rather famous quote is often called “Conway’s Law”. It comes from a paper that Conway wrote, and Eric S. Raymond later gave it a memorable form: “If you have four groups working on a compiler, you’ll get a 4-pass compiler.”\nWhat many organizations hope is that by creating independent, empowered teams, they will end up with independent, well-built microservices.
While there is some truth to it, software engineering is not that simple! With three independent teams, you may just end up with three pieces of a “distributed monolith” rather than nice independent microservices.\n“Be conservative in what you send, be liberal in what you accept”\nJon Postel\nDesigning good APIs is crucial for success in the world of services (micro or not). Jon Postel offers good advice (originally about the TCP protocol) on how to approach your communication.\nYou should keep what you send as stable as possible, while remaining able to consume APIs even if they change slightly. This way you end up with robust systems.\n“The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform… But it is likely to exert an indirect and reciprocal influence on science itself.”\nAda Lovelace\nAda Lovelace (1815-1852) is often credited with being the first programmer. She was programming an Analytical Engine rather than a modern computer, but she already recognized its potential.\nIt is remarkable that someone over 150 years ago already expected machines to influence science. A fascinating insight.\n“You can mass-produce hardware; you cannot mass-produce software; you cannot mass-produce the human mind.”\nMichio Kaku\nThis quote should hang on every project manager’s wall! It is still far too common in the 21st century for people to imagine that they can just throw bodies at a software project and it will all work out!\nWe are constantly seeing small, high-performing teams outcompete massive, slow-moving corporations. With the Agile, Lean, DevOps movements there is a light at the end of the tunnel, but there is still much educating to do.\nShare that quote with your managers and colleagues and make sure that they don’t forget it!\n“Software is a great combination between artistry and engineering.”\nBill Gates\nI really like this quote from Bill Gates. It sums up how many of us feel about software development.
There is skill and engineering, but also an element of beauty in it. Let’s not forget about it!\nFinal Word I hope you enjoyed these quotes as much as I did. If you have favourite quotes of your own, tweet them at me – @e4developer. Until next time!\n","permalink":"https://e4developer.com/posts/my-favourite-software-development-quotes/","summary":"\u003cp\u003e“All problems in computer science can be solved by another level of indirection” – David Wheeler. In this article, I look at my favourite quotes relating to software development and what we can learn from them.\u003c/p\u003e\n\u003cblockquote\u003e\n\u003cp\u003e\u003cem\u003e“All problems in computer science can be solved by another level of indirection”\u003c/em\u003e\u003c/p\u003e\n\u003cp\u003eDavid Wheeler\u003c/p\u003e\n\u003c/blockquote\u003e\n\u003cp\u003eI opened the article with this one, as it is one of the truest and most insightful quotes that I have ever read about computer science. You can say that this is another way of expressing the \u003ca href=\"https://en.wikipedia.org/wiki/Dependency_inversion_principle\"\u003eDependency Inversion Principle\u003c/a\u003e, which states:\u003c/p\u003e","title":"My favourite Software Development Quotes"},{"content":"Spring Boot is the most popular Java framework for developing microservices. In this article, I will share with you the best practices for working with Spring Boot that I have gathered by using it in professional development since 2016. I base these on my personal experience and the writings of recognized Spring Boot experts.\nIn this article, I focus on practices specific to Spring Boot (most of the time, also applicable to Spring projects). If you want to learn about Java best practices, I recommend “Effective Java” which I review in a separate article.\nThe following best practices are listed in no particular order.\nUse Auto-configuration One of the flagship features of Spring Boot is its use of Auto-configuration.
This is the part of Spring Boot that makes your code simply work. It gets activated when a particular jar file is detected on the classpath.\nThe simplest way to make use of it is to rely on the Spring Boot Starters. So, if you want to interact with Redis, you can start by including:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-data-redis\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; if you want to work with MongoDB, you have:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-data-mongodb\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; and so on… By using these starters, you are relying on a tested and proven configuration that is going to work well together. This helps to avoid the dreaded Jar Hell (nice dzone.com article linked).\nIt is possible to exclude some classes from the Auto-configuration by using the following annotation property: @EnableAutoConfiguration(exclude={ClassNotToAutoconfigure.class}), but you should do it only if absolutely necessary.\nOfficial documentation on Auto-configuration can be found here.\nUse Spring Initializr for starting new Spring Boot projects This best practice comes from Josh Long (Spring Advocate, @starbuxman).\nSpring Initializr (https://start.spring.io/) gives you a dead easy way to start a new Spring Boot project and load it with the dependencies you may need.\nCreating your application with the Initializr ensures that you are picking up the tested and approved dependencies that will work well with Spring auto-configuration.
You may even discover some new integrations that you were not aware existed.\nConsider creating your own auto-configuration for common organizational concerns One more from Josh Long (Spring Advocate, @starbuxman)- this one is for power users.\nIf you are working in an organization that relies heavily on Spring Boot and you have common concerns that need to be solved, you can create your own auto-configuration.\nThis task is more involved, so you need to consider when the benefits are worth the investment. It is easier to maintain a single auto-configuration than multiple bespoke configurations, all slightly different.\nIf you are publishing your library to open-source, providing a Spring Boot configuration will greatly ease adoption for thousands of users.\nStructure your code correctly While Spring Boot allows you a lot of freedom, there are some basic rules worth following when laying out your source code.\nAvoid using the default package. Make sure that everything (including your entry point) lives in a well-named package. This way you will avoid surprises related to wiring and component scanning. Keep your Application.java (your entry class) in the top-level source directory. I recommend keeping Controllers and Services together in modules that are oriented around functionality, but this is optional. Some very good developers recommend keeping all Controllers together. Stick to one style! Keep your @Controller’s clean and focused Controllers are supposed to be very thin. You can read about the Controller pattern explained as part of GRASP here. You want Controllers to coordinate and delegate, rather than to execute actual business logic. Here are the key practices:\nControllers should be stateless! Controllers are by default singletons and giving them any state can cause massive issues. Controllers should not execute business logic but rely on delegation. Controllers should deal with the HTTP layer of the application. This should not be passed down to Services.
Controllers should be oriented around a use-case / business-capability. To go deeper here would be to start discussing best practices for designing REST APIs. These are worth learning about regardless of whether you use Spring Boot.\nBuild your @Service’s around business capabilities Services are another core concept in Spring Boot. I find it best to build services around business capabilities/domains/use-cases, call it what you want.\nApplications with Services called something like AccountService, UserService, PaymentService are much easier to deal with than those with DatabaseService, ValidationService, CalculationService etc.\nYou could decide to go with a 1-to-1 mapping between Controllers and Services. That would be ideal. That does not mean that Services can’t use each other!\nMake your database a detail – abstract it from the core logic I used to be unsure of how to best treat database interaction in Spring Boot. After reading “Clean Architecture” by Robert C. Martin, it is much clearer to me.\nYou want your database logic abstracted away from the Service. Ideally, you don’t want a Service to know what database it is talking to. Have some abstractions that encapsulate the persistence for your objects.\nRobert C. Martin argues passionately for making your database a “detail”. That means not coupling your application to a specific database. It used to be very rare that you would ever switch databases. I have noticed that with Spring Boot and modern microservices development- things move much quicker.\nKeep your business logic free of Spring Boot code With the lessons from “Clean Architecture” in mind, you should also protect your business logic. It is very tempting to mix all sorts of Spring Boot code there… Don’t do it. If you resist the temptation, you will keep your business logic reusable.\nIt is common for parts of services to become libraries.
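To make the last few points concrete, here is a minimal sketch. The names (AccountService, AccountStore) are made up for illustration: the business rule lives in plain Java, and persistence hides behind an interface that an in-memory, Redis-backed or JPA-backed implementation could satisfy.

```java
import java.util.HashMap;
import java.util.Map;

// AccountStore is the "detail": persistence is hidden behind an interface,
// so the business logic never learns which database sits underneath.
interface AccountStore {
    long load(String accountId);
    void save(String accountId, long balanceInCents);
}

// AccountService holds the business rule and imports nothing from Spring
// or from any database driver.
class AccountService {
    private final AccountStore store;

    AccountService(AccountStore store) {
        this.store = store;
    }

    void deposit(String accountId, long amountInCents) {
        if (amountInCents <= 0) {
            throw new IllegalArgumentException("amount must be positive");
        }
        store.save(accountId, store.load(accountId) + amountInCents);
    }
}

public class DatabaseAsDetailDemo {
    public static void main(String[] args) {
        Map<String, Long> table = new HashMap<>();
        // In-memory implementation; a Redis- or JPA-backed one could be
        // swapped in later without touching AccountService.
        AccountStore inMemory = new AccountStore() {
            public long load(String id) { return table.getOrDefault(id, 0L); }
            public void save(String id, long balance) { table.put(id, balance); }
        };
        AccountService service = new AccountService(inMemory);
        service.deposit("acc-1", 500);
        service.deposit("acc-1", 250);
        System.out.println(inMemory.load("acc-1")); // prints 750
    }
}
```

Because AccountService imports nothing from Spring or from any database driver, it can later be extracted into a library, and swapping the persistence technology never touches the business rule.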
Such libraries are much easier to create if you don’t have to remove a lot of Spring annotations from your code.\nFavour Constructor Injection This one comes from Phil Webb (Current Lead of Spring Boot, @phillip_webb).\nOne way to keep your business logic free from Spring Boot code is to rely on Constructor Injection. Not only is the @Autowired annotation optional on constructors, but you also get the benefit of being able to easily instantiate your bean without Spring.\nBe familiar with the concurrency model One of the most popular articles I ever wrote is “Introduction to Concurrency in Spring Boot”. I believe the reason for this is that this area is often misunderstood and ignored. With that come problems.\nIn Spring Boot- Controllers and Services are by default Singletons. That introduces possible concurrency problems if you are not careful. You are also usually dealing with a limited thread-pool. Familiarise yourself with these concepts.\nIf you are using the new WebFlux style of Spring Boot applications, I have explained how that works in “Spring’s WebFlux / Reactor Parallelism and Backpressure”.\nExternalise and mature your configuration management This point goes beyond Spring Boot, although it is a common problem that happens when people start creating multiple similar services…\nYou can manually deal with configuring a Spring application. If you are dealing with dozens of Spring Boot applications, you need to mature your configuration management.\nI recommend two main approaches:\nUse a configuration server, something like Spring Cloud Config Store all your configuration in environment variables (that could be provisioned based on a git repository) Either of these options (especially the second) requires you to dabble a bit in the DevOps area, but this is to be expected in the world of microservices.\nProvide global exception handling You really need a consistent way of dealing with exceptions.
Spring Boot provides two main ways of doing that:\nYou should use HandlerExceptionResolver for defining your global exception handling strategy. You can annotate your Controllers with @ExceptionHandler. This can come in useful if you want to be specific in certain cases. This is pretty much the same as in Spring and Baeldung has a detailed article on Error Handling for REST with Spring that is well worth reading.\nUse a logging framework You are probably aware of this, but you should be using a Logger for logging rather than doing it manually with System.out.println(). This is easily done in Spring Boot with pretty much no configuration. Just get your logger instance for the class:\nLogger logger = LoggerFactory.getLogger(MyClass.class); This is important, as it will let you set different logging levels as necessary.\nTest your code This is not Spring Boot specific, but it warrants a reminder! Test your code. If you are not writing tests, then you are writing legacy code from the get-go.\nIf someone else comes to your codebase, very quickly it may become dangerous to change anything. This can be even riskier when you have multiple services depending on each other.\nSince we are talking about Spring Boot best practices, you should consider using Spring Cloud Contract for your Consumer Driven Contracts. It will make your integrations with other services much easier to work with.\nUse testing slices to make your testing easier and more focused This one comes from Madhura Bhave (Spring Developer, @madhurabhave23).\nTesting code with Spring Boot can be tricky- you need to initialize your data layer, wire numerous services, mock things… It actually does not have to be that hard! The answer is- use testing slices.\nWith testing slices, you can wire up only the parts of your application that are necessary. That may save you a lot of time and ensure that your tests are not coupled to things that you are not using.
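Test slices wire up part of the Spring context; for pure business logic you often need no context at all, and constructor injection (recommended earlier) is what makes that possible. A hedged sketch with a made-up ReportService: because the java.time.Clock dependency arrives through the constructor, a test can hand in Clock.fixed(...) and assert behaviour without starting Spring.

```java
import java.time.Clock;
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneOffset;

// Hypothetical service: the Clock arrives through the constructor, so Spring
// could supply a system clock in production while a test supplies a fixed one.
class ReportService {
    private final Clock clock;

    ReportService(Clock clock) {
        this.clock = clock;
    }

    String reportName() {
        return "report-" + LocalDate.now(clock);
    }
}

public class ConstructorInjectionDemo {
    public static void main(String[] args) {
        // No Spring context, no test slice - just new it up with a fixed clock.
        Clock fixed = Clock.fixed(Instant.parse("2018-06-01T10:00:00Z"), ZoneOffset.UTC);
        ReportService service = new ReportService(fixed);
        System.out.println(service.reportName()); // prints report-2018-06-01
    }
}
```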
There is a blog post titled Custom test slice with Spring Boot 1.4 from spring.io that explains that technique.\nSummary Thanks to Spring Boot, writing Spring-based microservices became easier than ever. I hope that with these best practices, your implementation journey will not only be quicker but also more robust and successful in the long run. Good luck!\nThank you! I would like to thank the following people for helping me make this article better:\nMarcin Grzejszczak (@MGrzejszczak) – for retweeting my blog post and getting the attention of the Spring team Josh Long (@starbuxman) – for the feedback and additional best practices Phil Webb (@phillip_webb) – for the feedback and additional best practices Madhura Bhave (@madhurabhave23) – for the feedback and additional best practices Thanks a lot for that guys! Spring Boot has a really fantastic community!\n","permalink":"https://e4developer.com/posts/spring-boot-best-practices/","summary":"\u003cp\u003eSpring Boot is the most popular Java framework for developing microservices. In this article, I will share with you the best practices for working with Spring Boot that I have gathered by using it in professional development since 2016. I base these on my personal experience and the writings of recognized Spring Boot experts.\u003c/p\u003e\n\u003cp\u003eIn this article, I focus on practices specific to Spring Boot (most of the time, also applicable to Spring projects). If you want to learn about Java best practices, I recommend \u003cem\u003e“Effective Java”\u003c/em\u003e which I \u003ca href=\"https://e4developer.com/posts/effective-java-microservices-require-effective-java/\"\u003ereview in a separate article\u003c/a\u003e.\u003c/p\u003e","title":"Spring Boot - Best Practices"},{"content":"Have you heard of Reddit? It is an amazing place on the Internet, where people share links and ideas. It also has a very active programming community.
I visit Reddit daily- you can learn a lot from people there and have some interesting discussions. In this article, I will tell you how I use Reddit and why I consider it so valuable.\nReddit is not the only social media that I use for learning about technology. I have previously written about using Twitter for staying up to date with the Java world.\nReddit is different from Twitter. While Twitter is all about individuals, Reddit is all about communities.\nProgramming communities on Reddit Reddit works by having you join multiple subreddits. As you join them, you are effectively building your Reddit home page. My current homepage looks something like this:\nOk, so you like the idea of joining these programming communities (subreddits), but you are not sure where to start? Let me recommend a few for you:\nJava Themed: Java – This is an extremely active and popular subreddit where you can find anything related to programming in Java. This is not a channel for learning Java or asking for help. You share news and articles and discuss ideas here. Learnjava – This is where you should discuss learning Java. Perfect for students and newcomers to the language. Javahelp – This is where you ask for help with Java. Kind of like a StackOverflow with a smaller community and laxer rules. General Programming: Programming – The channel to discuss everything and anything related to programming. Worth following, as it has a broader scope than the more dedicated Java subreddit. DevOps – Subreddit with a focus on this new fancy thing called DevOps. Want to talk about Kubernetes? Join it. Machinelearning – Teaching machines how to think is really fascinating. Join this subreddit to see what the present and the future of machine learning may look like. Others worth considering: ProgrammerHumor – For your daily dose of laughs! Honestly, this is one of the funniest places on the Internet for programmers. The voting system makes sure that, usually, the quality content comes out on top.
Chess – If you like playing chess, or want to get better, join in! You don’t have to use Reddit only for programming content. The website is really great for other fields, interests and hobbies as well.\nReddit Highlights Still not sold on that Reddit thing? I will share with you some of the really good content I found there recently:\nOperator += is broken for Strings in Java 9 and 10 – did you know about it?! I found out from Reddit and it changed my view on the importance of Java Long Term Support releases.\nWhy I Moved Back from Gradle to Maven – someone shared this article on Reddit. It is worth a read. What is even more interesting is the discussion that takes place in the comments on Reddit.\nMachine Learning in Java – you don’t need to learn R to do some machine learning.\nIt’s perfectly fine to only code at work, don’t let anyone tell you otherwise – are you ready for 1000+ comments arguing both cases?\nThere is much more, and if you want the “best” stuff – check the top pages of the Java subreddit. Of course it is worth exploring other subreddits as well.\nShould you participate? I started using Reddit only passively, simply reading and following the discussions. There is nothing wrong with that. On the other hand, once you get comfortable with the community and their web culture, you will most likely want to contribute and share your favourite resources.\nI like Reddit because it is nice to have people to talk with about topics that I care about. While Twitter is great, Reddit facilitates discussion much better. Some things simply need a few more characters…\nSummary Reddit is the best place I know to discuss programming articles and ideas. If you have not tried it yet, I hope this article at least clarified what it is all about. Given how easy it is to follow Reddit (I read the homepage as I would any other website), I recommend you give it a go!
What is there to miss?\n","permalink":"https://e4developer.com/posts/reddit-the-java-goldmine/","summary":"\u003cp\u003eHave you heard of Reddit? It is an amazing place on the Internet, where people share links and ideas. It also has a very active programming community. I visit Reddit daily- you can learn a lot from people there and have some interesting discussions. In this article, I will tell you how I use Reddit and why I consider it so valuable.\u003c/p\u003e\n\u003cp\u003eReddit is not the only \u003cem\u003esocial media\u003c/em\u003e that I use for learning about technology. I have previously written about \u003ca href=\"https://e4developer.com/posts/how-to-stay-up-to-date-with-java-and-tech-use-twitter/\"\u003eusing Twitter for staying up to date with the Java world\u003c/a\u003e.\u003c/p\u003e","title":"Reddit - the Java goldmine"},{"content":"In this article, we will look closer at a fascinating open source project. Meet Redis! You may be familiar with Redis already; in that case, you may be interested in the different use cases it has for microservices architecture. Read on to see how this “in-memory data structure store, database, cache, and message broker” can make your system better!\nWhat is Redis? I already revealed that in the introduction. To repeat (using redis.io’s own words):\nRedis is an open source (BSD licensed), in-memory data structure store, used as a database, cache and message broker.\nIn practice, that means that we can use Redis to store and retrieve data. We can do that with very good performance, potentially (not necessarily) sacrificing some durability of that data.\nRedis is also an open source project- that’s great news! We will be able to get hands-on without any obstacles in our way.\nRedis and Docker As with most things these days, I really recommend that you use Docker for playing with Redis. If you have never used Docker before, I wrote a short intro.\nI have used Redis version 4.0.8 in this article.
If you don’t care about which version of Redis you run, but you want the latest, you can start the container with the following command:\ndocker run --name my-redis -d redis\nThis will expose port 6379 (the default) for connecting to Redis.\nYou can also run your Redis with persistent storage to a volume /data if you wish:\ndocker run --name my-redis -d redis redis-server --appendonly yes\nIf you don’t know about volumes yet, Docker has good documentation about them.\nSpeaking of documentation, if you actually consider running Redis in production, I strongly advise you to read the official docker repo documentation for Redis.\nYou now know how to run Redis with Docker. For standalone installation, please check redis.io/download – this should be up to date!\nRedis and Spring Boot Redis is extremely popular and plays nice with most libraries. Since I enjoy and recommend Spring Boot, I will show you how easily the two integrate.\nThe only dependency you need to add to your POM file is:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-data-redis\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; One of the main benefits of using spring-boot-starter-data-redis is that you don’t have to do any configuration yourself. Spring autoconfiguration shines once again.
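Out of the box, the starter connects to localhost on port 6379. If the defaults ever need changing, a couple of standard properties are enough. A minimal sketch for the Spring Boot versions this article targets (the values shown are simply the defaults):

```properties
# application.properties - Redis connection settings (these are the defaults)
spring.redis.host=localhost
spring.redis.port=6379
```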
You can define properties in your application.properties as per usual in Spring Boot.\nLet’s make a simple application where we can store notes and retrieve them using Redis.\nFirst, we need a Note object defined:\npackage com.e4developer.springredis; import org.springframework.data.annotation.Id; import org.springframework.data.redis.core.RedisHash; @RedisHash(\u0026#34;Note\u0026#34;) public class Note { @Id private final String id; private final String message; public Note(String id, String message) { this.id = id; this.message = message; } public String getId() { return id; } public String getMessage() { return message; } } This object is annotated as @RedisHash so it can get persisted in Redis using NoteRepository:\npackage com.e4developer.springredis; import org.springframework.data.repository.CrudRepository; import org.springframework.stereotype.Repository; @Repository public interface NoteRepository extends CrudRepository\u0026lt;Note, String\u0026gt; {} The last thing to add is the NoteController to bring these things together:\npackage com.e4developer.springredis; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.http.HttpStatus; import org.springframework.http.ResponseEntity; import org.springframework.stereotype.Controller; import org.springframework.web.bind.annotation.GetMapping; import org.springframework.web.bind.annotation.PathVariable; import org.springframework.web.bind.annotation.PostMapping; import org.springframework.web.bind.annotation.RequestBody; import java.util.Optional; @Controller public class NoteController { @Autowired private NoteRepository noteRepository; @PostMapping(\u0026#34;/notes/{id}\u0026#34;) public ResponseEntity\u0026lt;String\u0026gt; setNote(@RequestBody String message, @PathVariable(\u0026#34;id\u0026#34;) String id){ Note note = new Note(id, message); noteRepository.save(note); return new ResponseEntity\u0026lt;\u0026gt;(HttpStatus.CREATED); }
@GetMapping(\u0026#34;/notes/{id}\u0026#34;) public ResponseEntity\u0026lt;Optional\u0026lt;Note\u0026gt;\u0026gt; readNote(@PathVariable(\u0026#34;id\u0026#34;) String id){ Optional\u0026lt;Note\u0026gt; note = noteRepository.findById(id); return new ResponseEntity\u0026lt;\u0026gt;(note, HttpStatus.OK); } } Now you can save and retrieve Notes using GET and POST requests! We made use of Spring Data here, which I looked at before.\nIf you want to work more closely with Redis and Spring, you should familiarize yourself with the Spring Data Redis documentation.\nSince using Redis is nice and easy, let’s look at the most common use-cases for it in the microservices world.\nRedis – Microservices Cache One of Redis’s strongest suits is the caching capability it provides. If you need to temporarily store data and share it between microservices – Redis is the go-to choice.\nWith the fine-grained eviction policies at your disposal and stellar performance, it is hard to see why you would need anything else.\nYet many developers under-appreciate it. While Redis is a great caching solution, it offers much more…\nRedis – In-memory Database As I already mentioned, Redis describes itself as an “in-memory data structure store, database” first. It is, in fact, an outstanding NoSQL product.\nMany services can benefit from using in-memory databases. They often significantly outperform normal databases on most metrics.\nA common use case would be a large amount of read-only data that is rarely modified. You can have a traditional database maintaining the record and an in-memory database providing the read performance. This is just one of many in-memory database use cases.\nRedis – Database If you need to have your data persisted- don’t fret. Redis also has you covered.\nRedis uses a clever mix of:\nRDB – snapshots of your dataset at specified intervals AOF – a persisted log of every write operation received by the server to provide speed and data reliability as required.
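As a rough illustration of how the two mechanisms just listed are switched on, here is a sketch of the relevant redis.conf directives (the values are illustrative, not recommendations):

```conf
# RDB: write a snapshot if at least 1 key changed in the last 900 seconds
save 900 1
# AOF: append every write operation to a log file
appendonly yes
# fsync the AOF once per second - a common speed/durability middle ground
appendfsync everysec
```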
This is an interesting topic and the Redis website contains a good comparison of RDB and AOF pros and cons.\nOne thing is clear- if you wish to use Redis for persistence, the project is more than ready to provide a robust solution to the problem.\nRedis – Message Broker The last, and perhaps most surprising, capability that Redis provides is its Pub/Sub mechanism.\nIf you are interested in microservice choreography- it is a viable solution. Using Redis may be simpler than setting up RabbitMQ and it is definitely simpler than dealing with the intricacies of Kafka.\nThe Redis Pub/Sub model is very simple and nicely covered in a short section on the official website.\nIf you are unsure what microservices choreography is, I have recorded a short video about it:\nSummary I wrote this article to promote Redis as a great tool for building microservices solutions. Having a tool that can be your cache, database and messaging component in one can make your microservices life simpler.\nRemember, Redis is much more than simply a caching solution!\n","permalink":"https://e4developer.com/posts/using-redis-in-microservices-architecture/","summary":"\u003cp\u003eIn this article, we will look closer at a fascinating open source project. Meet Redis! You may be familiar with Redis already, in that case, you may be interested in the different use cases it has for microservices architecture. Read on to see how this “\u003cem\u003ein-memory data structure store, database, cache, and message broker”\u003c/em\u003e can make your system better!\u003c/p\u003e\n\u003ch2 id=\"what-is-redis\"\u003eWhat is Redis?\u003c/h2\u003e\n\u003cp\u003eI already revealed that in the introduction. To repeat (using \u003ca href=\"https://redis.io/\"\u003eredis.io\u003c/a\u003e’s own words):\u003c/p\u003e","title":"Using Redis in Microservices Architecture"},{"content":"You may know that playing chess is my passion.
The name of this website- E4developer comes from the chess move I start my games with – e4.\nThere is another passion that runs in my family – chess collecting. I have spent quite a lot of time in the past few weeks documenting that collection and putting it up online. Check it out yourself – www.chesscollecting.com\nDon’t worry- I will not write less here! I am just excited to share this with you, as it may inspire you to do that personal project that you always wanted. Creating www.chesscollecting.com was an immensely satisfying experience.\nHere are some of my favourite photos from the collection:\nIf you would like to see more beautiful sets and photos, check www.chesscollecting.com. If you know of other beautiful chess sets worth tracking down- let me know!\n","permalink":"https://e4developer.com/posts/my-family-chess-collection/","summary":"\u003cp\u003eYou may know that playing chess is my passion. The name of this website- \u003cem\u003eE4developer\u003c/em\u003e comes from the chess move I start my games with – e4.\u003c/p\u003e\n\u003cp\u003eThere is another passion that runs in my family – chess collecting. I have spent quite a lot of time in the past few weeks documenting that collection and putting it up online. Check it out yourself – \u003ca href=\"https://www.chesscollecting.com/collection/\"\u003ewww.chesscollecting.com\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eDon’t worry- I will not write less here! I am just excited to share this with you, as it may inspire you to do that personal project that you always wanted. Creating \u003ca href=\"https://www.chesscollecting.com/collection/\"\u003ewww.chesscollecting.com\u003c/a\u003e was an immensely satisfying experience.\u003c/p\u003e","title":"My Family Chess Collection"},{"content":"I have recently been writing a lot about microframeworks and my enthusiasm for them. Even though I think they are amazing, they are not always the answer. 
In this article, I will explore use cases where a fully featured framework may be just what you need.\nWhat is the difference between framework and microframework? Before going deeper into the argument, let’s make sure we are clear about what we mean when talking about microframeworks.\nSpark Java and Javalin are two great examples. Very simple frameworks that focus on helping you deal with REST APIs and basic server operations. Not much more. These two examples are both under 15,000 lines of code (Spring Boot Core is about 100,000 before we count all the dependencies).\nFull-fledged frameworks like Spring, Grails, different flavours of Enterprise Java (now Jakarta EE) bring much more to the table. Let’s look at some of the core Spring features:\nAdvanced dependency injection Advanced aspect-oriented programming (AOP) Controllers that can process requests differently with a lot of help from the framework Multiple optional dependencies for working with data, security, messaging etc. There are of course many more differences and features, but this article is not solely about Spring.\nWhen to use a fully featured framework? When you need to! It really boils down to that. We have identified some core missing features and if you really need them – go for it. Let me give you some examples where frameworks like Spring shine:\nWhen you are building a large application. Ok, let’s say the dreaded word – monolith. These large applications don’t have to be ugly. Frameworks like Spring can help you build something maintainable. When you need a specific capability that the framework provides. If you want to use Spring Data or Spring Security for example. Or you want to make use of multiple capabilities provided by Spring Cloud. This may sound controversial, but I think we should lean towards the most popular solutions when working on somebody else’s systems. If you are a consultant (like myself), it is easier to leave your client with a mainstream framework than a bespoke solution. 
I think these are the three main cases for using a fully featured framework. To summarise:\nWhen building large applications When you require a specific capability When consulting (although this may change when some microframeworks become mainstream) There is one more thing though…\nHow to use frameworks smartly? If you decide to use a fully featured framework, do it smartly. I realized the importance of this after reading Clean Architecture by Robert C. Martin. Uncle Bob writes a very memorable thing there:\n“Don’t marry the framework”\nRobert C. Martin\nThe idea behind this is that when using a framework, it is very easy to get tightly coupled to the framework code. Your application stops looking like a User Account Service (for example) and starts to look like a Large Grails Application.\nMarrying the framework means that you will become inseparable from the framework. This is especially risky when developing something larger, where a rewrite is not an option… “For better, for worse, for richer, for poorer, in sickness and in health…” Do you trust your framework like that?\nHow to integrate with a framework smartly? Here is some advice:\nSeparate your business logic from the framework code as much as possible. It is ideal to have most of the business logic be free of the framework code. This way you can always re-use it. When using things such as database integration, etc. consider following the Clean Architecture advice. Separate this code with a layer of abstraction. Make use of the Dependency Inversion Principle to abstract away the framework code. I really think that Clean Architecture hits the nail on the head here:\nTo use the framework well, to avoid marrying it – strive to keep your Use Cases and Entities free of the framework code.\nSummary I promote microframeworks a lot. That does not mean that they are always the right tool for the job. 
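The advice above can be sketched in plain Java. This is only an illustrative sketch – the NoteStore and NoteService names are invented for the example – but it shows the shape: the business rule has no framework imports, and any framework-specific code lives behind the abstraction.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// The Dependency Inversion Principle in action: the business logic
// depends on an abstraction, never on a concrete framework class.
interface NoteStore {
    void save(String id, String text);
    Optional<String> find(String id);
}

// Pure business logic - no framework imports anywhere in this class.
class NoteService {
    private final NoteStore store;

    NoteService(NoteStore store) {
        this.store = store;
    }

    // A business rule: notes must not be blank.
    void addNote(String id, String text) {
        if (text == null || text.isBlank()) {
            throw new IllegalArgumentException("note must not be blank");
        }
        store.save(id, text);
    }

    String readNote(String id) {
        return store.find(id).orElse("(not found)");
    }
}

// A trivial adapter - in a real system this is where Spring Data,
// Redis, etc. would live. Swapping it out does not touch NoteService.
class InMemoryNoteStore implements NoteStore {
    private final Map<String, String> data = new HashMap<>();
    public void save(String id, String text) { data.put(id, text); }
    public Optional<String> find(String id) { return Optional.ofNullable(data.get(id)); }
}

public class DontMarryTheFramework {
    public static void main(String[] args) {
        NoteService service = new NoteService(new InMemoryNoteStore());
        service.addNote("1", "buy milk");
        System.out.println(service.readNote("1"));
        System.out.println(service.readNote("2"));
    }
}
```

If you later divorce the framework, only the adapter class has to be rewritten – the Use Cases and Entities survive untouched.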
There are situations where a fully featured framework like Spring may be a better tool to solve your problem.\nIf you use frameworks smartly, you may get all the benefits and avoid most of the drawbacks of tight coupling with the framework. When designing software systems – always think about which tool will work best for your use case.\n","permalink":"https://e4developer.com/posts/when-to-use-a-java-framework-like-spring/","summary":"\u003cp\u003eI have recently been writing a lot about microframeworks and my enthusiasm for them. Even though I think they are amazing, they are not always the answer. In this article, I will explore use cases where a fully featured framework may be just what you need.\u003c/p\u003e\n\u003ch2 id=\"what-is-the-difference-between-framework-and-microframework\"\u003eWhat is the difference between framework and microframework?\u003c/h2\u003e\n\u003cp\u003eBefore going deeper into the argument, let’s make sure we are clear about what we mean when talking about microframeworks.\u003c/p\u003e","title":"When to use a Java framework like Spring?"},{"content":"Thank you for reading my newsletter. I have some exciting news to share with you!\nI have started E4developer YouTube Channel! You can visit and subscribe now. I have posted a couple of videos already. I recommend Microservices Explained – Orchestration vs Choreography, where I explain the key differences between the two. There is much more to come, as I explain in this article. Please give me some feedback.\nThis month was quite busy and the newsletter is a little later than usual. Enjoy the articles!\nWhat connects successful architectures:\nThe Key to a Successful Software Architecture\nLooking at the goldmine of microservices resources:\n“Awesome Microservices” – discover technologies and theory\nReview and my thoughts on Clean Architecture – a very interesting book:\nDiscovering “Clean Architecture” with Uncle Bob\nVery popular article about chasing simplicity in microservices development. 
We meet microframeworks again:\nThe Quest for Simplicity in Java Microservices\nGoing back to basics and defining what microservices architectures really are:\nMicroservices Definition\nLooking at the value and the role of technical architects:\nTechnical Architects – the role, the job and the value\nMy favourite book on beating code interviews and getting better at algorithms:\n“Cracking the Coding Interview” – learn that and much more!\nBuilding a functional API with Spring, based on the CIA World Factbook:\nCIA World Factbook API with Functional Spring\nSee what you might have missed if your knowledge is mainly academic:\nComputer Science Degree – The Missing Pieces\nTake a simple quiz and see how mature your microservices are:\nMicroservices Maturity Quiz\nAnswering the question of Java relevance in 2018:\nShould I Learn Java in 2018\n","permalink":"https://e4developer.com/posts/e4developer-newsletter-july-2018-number-5/","summary":"\u003cp\u003eThank you for reading my newsletter. I have some exciting news to share with you!\u003c/p\u003e\n\u003cp\u003eI have started \u003ca href=\"https://e4developer.us17.list-manage.com/track/click?u=97fa615b3733782df577bfa18\u0026amp;id=17264dbf18\u0026amp;e=3dd8e42692\"\u003eE4developer YouTube Channel\u003c/a\u003e! You can visit and subscribe now. I have posted a couple of videos already. I recommend \u003ca href=\"https://e4developer.us17.list-manage.com/track/click?u=97fa615b3733782df577bfa18\u0026amp;id=5a3a571b40\u0026amp;e=3dd8e42692\"\u003eMicroservices Explained – Orchestration vs Choreography\u003c/a\u003e, where I explain the key differences between the two. There is much more to come, as I explain in \u003ca href=\"https://e4developer.us17.list-manage.com/track/click?u=97fa615b3733782df577bfa18\u0026amp;id=2fe0b3a0df\u0026amp;e=3dd8e42692\"\u003ethis article\u003c/a\u003e. Please give me some feedback.\u003c/p\u003e\n\u003cp\u003eThis month was quite busy and the newsletter is a little later than usual. 
Enjoy the articles!\u003c/p\u003e","title":"E4developer Newsletter – July 2018 – Number 5"},{"content":"Recently I have read and reviewed “Clean Architecture” by Robert C. Martin. Very entertaining book. It made me think about the main quality that good software architectures exhibit. What is this quality? It is the existence of clear boundaries and well-defined modules. If you don’t agree with me – keep reading and I am sure we will find some common understanding.\nDivide and Conquer The famous maxim of Divide and Conquer (Latin: dīvide et imperā) is defined as:\ngaining and maintaining power by breaking up larger concentrations of power into pieces that individually have less power than the one implementing the strategy\nIt applies to software in a rather intuitive way. Often, when imagining the system as a whole, it may seem daunting. Implementing a large banking application can seem like an insurmountable task! Implementing a REST API or building a messaging layer may seem more reasonable.\nCreating well-defined components can help us direct our efforts appropriately and build some parts of the system to a production-level quality before others are even started.\nThe challenge here is building the right thing and not building too much (You aren’t gonna need it – YAGNI). This is why the understanding of the final architecture is important. It can guide our decisions regarding individual components, even when implementing them in isolation.\nDelegating Control Components can often be implemented in isolation. That means that we can have multiple people or even teams working on them separately, at the same time, without stepping on each other’s toes.\nSince we started with a semi-military maxim, we can continue with that theme. This delegation of control is a strategy that helped the Prussian army prevail in the Austro-Prussian and Franco-Prussian Wars. Helmuth von Moltke recognised the importance of delegating control. 
He favoured:\ndirectives stating his intentions, rather than detailed orders\nThis is all really fascinating stuff, and if you are interested in the history of military strategy, you can read more about that in Moltke’s theory of war summarised on Wikipedia.\nImplementing large software systems is not war (although sometimes it feels like a military campaign with many fronts), but the lesson stands. With components, we can guide the evolution of the system by stating intentions and responsibilities, while delegating the ownership and implementation to another team.\nStriving for Simplicity So far, it seems that components are the silver bullet (I don’t know why I refer to the military all the time today!) that can solve (bullets “solve” problems in a rather grim way) all of our architectural problems.\nThe problem with components is that they should be simple on two levels, yet this can be difficult to achieve.\nComponents – internal simplicity We are talking about high cohesion here. We want the components to have a clear purpose and clear reasons for changing. This is often difficult to achieve, hence everyone in a development team should be aware of these guiding principles. SOLID principles apply here.\nComponents – simplicity in integration To achieve true simplicity, you need to look at dependencies between components. If you end up with a set of inter-dependent, tightly coupled components, you have failed at achieving a simple (clean) architecture.\nComponents – being pragmatic The difficulty in designing good components is linked to the balance between making the components practical for the developers here and now, and making the components “good citizens” of our architecture diagram.\nBeing pragmatic is important. If you want to focus on one thing, make sure that your component is clearly separated and can always be replaced. 
Simplicity in integration often trumps internal simplicity.\nKeeping Options Open I have pointed to the clear separation of a component as more important than the simplicity of the component itself. Why is that? In order to be successful, we need to be able to react to changes. The “soft” is part of “software” for precisely that reason. We build software (rather than hardware) because we want an option to change as a part of the product.\nWhat can help us keep our options open:\nWell-defined components Clear separation between components Keeping the “details isolated” – database, frameworks, etc. as much as practical This is really what Clean Architecture explains very well. Good architectures are architectures that can change and adapt.\nComponents – the Enablers of Architecture I don’t want to give you the impression that components are only important for the Clean Architecture as described by Robert C. Martin. In fact, let’s look at some other architectures:\nMicroservices – “Small Autonomous services that work together, modelled around a business domain” (definition by Sam Newman) – You can see components as microservices here. Hexagonal Architecture – “Hexagonal Architecture is a form of application architecture that promotes the separation of concerns through layers of responsibility.” (definition from culttt.com) – I can’t see how you can separate concerns and build layers of responsibility without components. Data, context, and interaction (DCI) – “The paradigm separates the domain model (data) from use cases (context) and Roles that objects play (interaction). “ (from Wikipedia) – Separation of different concepts is once again at the heart of the architecture. Service-oriented architecture (SOA) – I don’t think I need a quote here. SOA has “components” (as Services) pretty much in the name. 
Any architecture style other than BBOM (Big Ball of Mud) that I can think of In order to implement any of the above architecture styles, you need skill in designing components and separating concerns. I hope you agree with me by now – building components is the key skill behind any successful software architecture.\nSummary I wanted to write about components to remind ourselves what lies at the heart of software development. Developing and designing systems that work and can change.\nArchitects are often guilty of thinking at too high a level and forgetting about components, while developers tend to focus too much on the code, sometimes forgetting the bigger picture. This is a reminder for all of us – let’s take good care of our components!\n","permalink":"https://e4developer.com/posts/the-key-to-a-successful-software-architecture/","summary":"\u003cp\u003eRecently I have read and \u003ca href=\"https://e4developer.com/posts/discovering-clean-architecture-with-uncle-bob/\"\u003ereviewed \u003cem\u003e“Clean Architecture”\u003c/em\u003e\u003c/a\u003e by Robert C. Martin. Very entertaining book. It made me think about the main quality that good software architectures exhibit. What is this quality? It is the existence of clear boundaries and well-defined modules. 
If you don’t agree with me – keep reading and I am sure we will find some common understanding.\u003c/p\u003e\n\u003ch2 id=\"divide-and-conquer\"\u003eDivide and Conquer\u003c/h2\u003e\n\u003cp\u003e\u003cimg loading=\"lazy\" src=\"/posts/the-key-to-a-successful-software-architecture/images/roman-empire-1024x827.jpg\"\u003e\u003c/p\u003e\n\u003cp\u003eThe famous maxim of Divide and Conquer (Latin: \u003cem\u003edīvide et imperā\u003c/em\u003e) is defined as:\u003c/p\u003e","title":"The Key to a Successful Software Architecture"},{"content":"Half a year after creating this blog I felt that there was something missing… I enjoy sharing my passion for the JVM ecosystem and software development with You, my dear readers, but I could never talk to You. I decided to change it and give YouTube a try. You can follow my efforts on the brand new E4developer YouTube channel!\nThe idea behind the E4developer YouTube channel Reading articles is great, but humans are social beings. I enjoy listening to others as well as talking to them. I wanted to find a medium where I could talk to You, on a frequent basis, about my passion.\nConferences and meetups are great, but they are not frequent enough and I need to make a talk that appeals to whoever is selecting it. On my YouTube channel, I can talk about whatever I feel is interesting and it is down to my audience (You!) to decide if they are interested.\nOne thing to mention here – I have no experience in any video production, so this is likely to require some learning from me. Don’t worry, I promise to work hard and things will get better as I make more videos!\nWhat videos can you expect on my channel? I will start with a series titled “Microservices Explained” where I take a microservices-related topic and give it a less-than-10-minutes explanation. I have started with the microservices definition itself (based on my earlier article):\nThis is my first ever attempt at making a video. 
I hope you enjoy it.\nMoving from here I have some ideas of topics that I will visit probably at least once a week:\nDiscussing good architecture practices Reviewing technology Live coding some examples Talking about people management and leadership Who knows what else? If you have a good idea about what sort of videos you would like to watch- let me know in the comments.\nLet’s start filming! With that, I hope that I got you interested. You can subscribe to my YouTube channel and let’s see what comes next. Wish me luck!\n","permalink":"https://e4developer.com/posts/e4developer-youtube-channel-is-here/","summary":"\u003cp\u003eHalf a year after creating this blog I felt that there was something missing… I enjoy sharing my passion for the JVM ecosystem and software development with You, my dear readers, but I could never talk to You. I decided to change it and give YouTube a try. You can follow my efforts on the brand new \u003ca href=\"https://www.youtube.com/channel/UCct_XHqdxXSYLZl7MfiDAAA\"\u003eE4developer YouTube channel\u003c/a\u003e!\u003c/p\u003e\n\u003ch2 id=\"the-idea-behind-the-e4developer-youtube-channel\"\u003eThe idea behind the E4developer YouTube channel\u003c/h2\u003e\n\u003cp\u003eReading articles is great, but humans are social beings. I enjoy listening to others as well as talking to them. I wanted to find a medium where I could talk to You, on a frequent basis, about my passion.\u003c/p\u003e","title":"E4developer YouTube channel is here!"},{"content":"Wouldn’t it be nice if someone would gather all the best resources, projects, technologies and everything else related to microservices? Yes, it would! The good news is- someone (many people) already did. Let me introduce you to awesome-microservices!\nWhat are “awesome lists”? Someone had an idea of listing the best GitHub projects and resources relating to particular technologies and… putting it on GitHub. 
This way, such a list can be curated and expanded by other community members in an orderly fashion.\nThis idea caught on so well that now we even have a curated list of awesome lists. Seriously, go check it out – it is a gold mine of good resources.\nLooking at “Awesome Microservices” I spent some time looking at the Awesome Microservices list and I would like to share some personal favourites of mine with you.\nYou can learn a lot from this list, just by looking at the capabilities section. If you want to learn something new, it may be necessary to know what is out there, or what is possible. Looking at the headings there, you get a nice list:\nAPI Gateways / Edge Services Configuration and Discovery Coordination and Governance Elasticity Job Schedulers / Workload Automation Logging Messaging Monitoring and Debugging Reactivity Resilience Security Serialization Storage Testing Continuous Integration and Continuous Delivery – this one points to a whole other Awesome CI/CD DevOps list! I can safely say that if you know your way around all these capabilities, then you know what you are doing with your microservices architecture! If you don’t know about some of them – the capabilities section of this awesome list is a great starting point!\nI enjoy reading a lot and the theory section of this list contains links to many good articles. There you will find the microservices article by Martin Fowler (one of the most read articles on microservices, I bet!), as well as links to other key documents such as the Reactive Manifesto.\nOne place where this list could be improved is the listing of Java VM frameworks. There is so much happening in this space that it is difficult for any list to stay up to date. With that in mind…\nMake it more awesome – expand it! This list is on GitHub (rather than a blog) so that you can easily expand it. 
Suggest new frameworks, new technologies, other interesting articles.\nSince you are reading my blog, I bet this is not the first microservices-related article that you have seen. If you remember one that you particularly liked, or you use microservices-related tech that is not mentioned there – read their very simple contributing guidelines and contribute.\nI see the areas where we could improve this list as:\nA more up-to-date overview of Java VM frameworks More interesting talks on microservices – there are so many good ones out there! More books – there were quite a few good ones released even this year More up-to-date articles I promise you that I will make some contributions – I hope you will too!\nSummary Awesome lists are a great idea. A way for people to collectively share information on what’s great out there. You can use Awesome Microservices to help you learn about microservices or get inspired.\nGiven the open nature of this effort, I hope you will share some of your awesome discoveries too.\n","permalink":"https://e4developer.com/posts/awesome-microservices-discover-technologies-and-theory/","summary":"\u003cp\u003eWouldn’t it be nice if someone would gather all the best resources, projects, technologies and everything else related to microservices? Yes, it would! The good news is- someone (many people) already did. Let me introduce you to \u003ca href=\"https://github.com/mfornos/awesome-microservices\"\u003eawesome-microservices\u003c/a\u003e!\u003c/p\u003e\n\u003ch2 id=\"what-are-awesome-lists\"\u003eWhat are “awesome lists”?\u003c/h2\u003e\n\u003cp\u003eSomeone had an idea of listing the best GitHub projects and resources relating to particular technologies and… putting it on GitHub. 
This way, such a list can be curated and expanded by other community members in an orderly fashion.\u003c/p\u003e","title":"“Awesome Microservices” - discover technologies and theory"},{"content":"Recently I have been taking a bit of a step back from microservices and trying to look at systems architecture from a more general perspective. With that mindset, I have picked up “Clean Architecture” (Amazon) by the “Legendary Craftsman” (that’s probably the publisher’s enthusiasm!) Robert C. Martin “Uncle Bob”. What follows are my thoughts and an overall review of the book.\nSOLID foundations The book starts quite a bit below the abstract levels of architecture. We are treated to a very entertaining review of the journey from Structured Programming, through Object-Oriented Programming and ending on Functional Programming.\nUncle Bob makes a good argument for why we are unlikely to see any further paradigm change. Each of these styles is characterized by specific restrictions. To paraphrase:\nStructured Programming imposes discipline on direct transfer of control. Think loops. Object-Oriented Programming imposes discipline on indirect transfer of control. Think polymorphism. Functional Programming imposes discipline upon variable assignment. Think immutability and pure functions. With the paradigms discussed, the book moves on to SOLID principles.\nIf you have not heard about SOLID, here is the quick breakdown:\nSRP: The Single Responsibility Principle OCP: The Open-Closed Principle LSP: The Liskov Substitution Principle ISP: The Interface Segregation Principle DIP: The Dependency Inversion Principle What I did not realise is that Robert C. Martin is the author of the SOLID theory, although he did not invent the acronym. That came from Michael Feathers a few years later. This gives Uncle Bob considerable authority to write about these principles. It also provides another surprise…\nThe Single Responsibility Principle is probably not what you think it is! 
The common misunderstanding is to explain it as “a class should do only one thing and do it well”… This is only partially correct. In the book, Uncle Bob provides the more refined explanation: “A module should be responsible to one, and only one, actor”. This makes this principle much more concrete and applicable at different levels of abstraction.\nI really enjoyed this deeper dive into the commonly discussed SOLID principles and deeper insight into their implications. This makes the book not only useful to architects, but also to pretty much any developer. I may be hinting here at the idea that architecture is also a responsibility of the developers.\nThinking about Components Once the foundations are established, the book moves on to discussing components. Some esoteric theory is introduced (measuring stability and abstraction as an actual metric), but this all leads in a good direction. While I doubt most people will benefit from the actual metrics, the ideas introduced here are very valuable:\nThe Common Closure Principle The Common Reuse Principle The Stable Dependencies Principle The Stable Abstraction Principle These are the theories on which good design practices rest. It is quite a difficult read, so I recommend taking your time with this section. These chapters contain some universal software design truths.\nThe theme of building components stays with us throughout the book. It is really successful in teaching two lessons:\nAlways depend on abstraction. Lower-level components should depend on higher-level components, never the opposite. Separating components and maintaining boundaries is one of the hallmarks of good architectures. “Clean Architecture” is full of good advice on how to reasonably separate components and direct your dependencies. 
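The “one actor” phrasing is easiest to see in code. Here is a sketch loosely based on the book’s employee example (the class and method names here are my own): accounting and HR are different actors, so their rules live in separate classes, even though both work on the same data.

```java
// SRP as "responsible to one actor": instead of one Employee class
// serving both accounting and HR (two actors, two reasons to change),
// each actor gets its own module over the shared data.

// Shared data structure - no behaviour that belongs to a single actor.
record Employee(String name, double hourlyRate, double hoursWorked) {}

// Actor 1: accounting. This class changes only when payroll rules change.
class PayCalculator {
    double regularPay(Employee e) {
        double overtime = Math.max(0, e.hoursWorked() - 40);
        return (e.hoursWorked() - overtime) * e.hourlyRate()
                + overtime * e.hourlyRate() * 1.5;
    }
}

// Actor 2: HR. Changes only when reporting rules change - even if the
// report happens to resemble the payroll calculation today.
class HourReporter {
    String report(Employee e) {
        return e.name() + " worked " + e.hoursWorked() + " hours";
    }
}

public class SrpExample {
    public static void main(String[] args) {
        Employee e = new Employee("Ada", 10.0, 45);
        System.out.println(new PayCalculator().regularPay(e)); // 40*10 + 5*15 = 475.0
        System.out.println(new HourReporter().report(e));
    }
}
```

If the two responsibilities were merged into one class, a change requested by HR could silently break a calculation that accounting depends on.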
This all culminates in the introduction of what Uncle Bob calls the Clean Architecture.\nThe Clean Architecture Clean Architecture is an actual architecture that Uncle Bob described in The Clean Architecture article posted on the 8thlight company website. If you are interested in details, I recommend reading that blog post (or better, read the book!). If you don’t have the time, this is the picture:\nThis is a simple (in a good way) approach to building software systems. The idea is to be strict about the direction of the dependencies and keep details (such as databases) as far as possible from the actual business rules.\nWhile sharing the title with the book itself, I don’t see it as the most valuable thing in the book! The architecture itself is a process; you need to constantly work to keep it clean. The advice on how to get there seems more valuable than the final picture. The final idea is good, but it is not the difficult part.\nOne thing that I really enjoyed in the book was Chapter 34: The Missing Chapter. Written by Simon Brown, it takes the ideas from the book and demonstrates them against the practicalities of implementing a Java system. It really highlights that, while it is important to know the concepts presented by Uncle Bob, you also have to be able to implement them well!\nDo (not) mind the details Uncle Bob is clear in his writing – the Database is a detail, the Web is a detail, even Frameworks are details. This may sound crazy, as it is both a statement and advice. It is very easy for your framework of choice to define the architecture.\nThis is something that I have been thinking about a lot recently. I have written about the rise of Microframeworks and the quest for simplicity in microservices, as too often I have seen frameworks overshadowing the real architectures. 
This book makes it very clear (in a very funny way) that you should be very careful when committing to “marrying” a framework “for better or for worse, in sickness and in health…”.\nWhat is architecturally significant? Perhaps the most interesting question that this book opened for me is the architectural significance of different parts of the system. What is architecturally significant? What should be? There is a good argument that even in micro-service (Uncle Bob uses the *“-”* convention, so let’s roll with it) architectures, not every service is significant.\nThis is dangerous and important to remember! Your architecture boundaries are not necessarily where your service boundaries lie. If these are not aligned properly, you may end up with an extremely chatty architecture over an expensive boundary. I think we have all heard about failed micro-services attempts because of that mistake.\nMake sure that your boundaries are correctly enforced, your components separated and details stay details. You should be on your way to achieving a Clean Architecture.\nSummary This was a very entertaining book. Short chapters make for an easy read. There is a lot of discussion that at first may seem to apply only to monolithic systems or problems of old, but the advice is universal.\nI really recommend the book. Read it with an open mind, and see how some of the timeless advice can be applied to even the most modern of systems.\n","permalink":"https://e4developer.com/posts/discovering-clean-architecture-with-uncle-bob/","summary":"\u003cp\u003eRecently I have been taking a bit of a step back from microservices and trying to look at systems architecture from a more general perspective. 
With that mindset, I have picked up \u003ca href=\"https://www.amazon.com/gp/product/0134494164/ref=as_li_tl?ie=UTF8\u0026amp;camp=1789\u0026amp;creative=9325\u0026amp;creativeASIN=0134494164\u0026amp;linkCode=as2\u0026amp;tag=e4developer01-20\u0026amp;linkId=e4b2982b894b1cb4cabeeab9dd4c783c\"\u003e\u003cem\u003e“Clean Architecture” (Amazon)\u003c/em\u003e\u003c/a\u003e by the “\u003cem\u003eLegendary Craftsman”\u003c/em\u003e (that’s probably the publisher’s enthusiasm!) Robert C. Martin “Uncle Bob”. What follows are my thoughts and an overall review of the book.\u003c/p\u003e\n\u003ch2 id=\"solid-foundations\"\u003eSOLID foundations\u003c/h2\u003e\n\u003cp\u003eThe book starts quite a bit below the abstract levels of architecture. We are treated to a very entertaining review of the journey from \u003cstrong\u003eStructured Programming\u003c/strong\u003e, through \u003cstrong\u003eObject-Oriented Programming\u003c/strong\u003e and ending on \u003cstrong\u003eFunctional Programming\u003c/strong\u003e.\u003c/p\u003e","title":"Discovering “Clean Architecture” with Uncle Bob"},{"content":"There is great value in simplicity. When things are simple, they are easier to understand, easier to extend and easier to modify. They are better. Simplicity is the ultimate compliment you can give to an architecture or a framework. In this article, I look at how four different frameworks – Spring Boot, Javalin, Vert.x and Micronaut – approach this quest for simplicity.\nSimple does not mean easy One of my inspirations for this article was a great presentation by Rich Hickey titled Simple Made Easy.\nThis is the slide from the presentation that really highlights the difference between simple and easy:\nSimple can be easy, but it is not the same thing. 
Simple is the opposite of Complex, Easy is the opposite of Difficult.\nI will not repeat the whole presentation here (I really recommend you watch it yourself), but to emphasize the points:\nSimplicity is the goal, we want things to not be complex Being easy is beneficial, but if it comes with hidden complexity, it can be very dangerous Let’s take a look at the history of Simplicity and Complexity in Java frameworks.\nEnterprise Java, Spring – Complex and Difficult Before moving to the microservices frameworks, let’s look at where we started.\nBefore microservices, we had two leading approaches for larger server-side applications written in Java: Enterprise Java and the Spring Framework:\nAt the risk of upsetting quite a few people, I consider both frameworks difficult and complex.\nSure, you can make Spring or Java EE “easy” for yourself by understanding them very well and learning how to use them, but that does not eliminate the underlying complexity.\nIt seems that I am not the only one who thought these were problems, as the Java EE (currently Jakarta EE) community is busy working on MicroProfile, and Spring enthusiasts will be quick to point me to the Spring Boot project.\nI will not focus on MicroProfile here, as it is still relatively new and the transition of Java EE to Jakarta EE is underway. If you are curious, I recommend checking the Jakarta EE official site, the MicroProfile official site and their GitHub repository.\nThe easy and robust Spring Boot Who does not love Spring Boot? Ok, it is the Internet, so I am sure that quite a few of you don’t! Anyway, Spring Boot was a game changer in the enterprise world. Writing services became really simple.\nSpring Boot also provides simplicity by partitioning the vast Spring ecosystem into small composable parts. Autoconfiguration is the magic that removed huge complexity from service developers.\nDo you ever wonder how autoconfiguration works? Have a look at the source code from 2.0.3.RELEASE. 
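To give a flavor of the core idea, here is my own minimal plain-Java sketch of conditional configuration (illustrative only- this is not Spring’s actual code, and the bean names are made up):

```java
import java.util.HashMap;
import java.util.Map;

// A made-up, minimal sketch of the idea behind Spring Boot autoconfiguration:
// a default bean is registered only when a marker class is on the classpath
// (like @ConditionalOnClass) and the user has not already supplied their own
// bean (like @ConditionalOnMissingBean). Not Spring code - just the concept.
public class AutoConfigurationSketch {

    static boolean classIsPresent(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    // Registers a default "objectMapper" entry only when the conditions hold.
    static void autoConfigure(Map<String, Object> beans) {
        if (classIsPresent("java.util.Optional") && !beans.containsKey("objectMapper")) {
            beans.put("objectMapper", "default-mapper");
        }
    }

    public static void main(String[] args) {
        Map<String, Object> beans = new HashMap<>();
        autoConfigure(beans);
        System.out.println(beans.get("objectMapper"));
    }
}
```

Spring’s real autoconfiguration classes take this idea much, much further.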
It is very complex, but it is managed entirely by the framework team. They decided to absorb the complexity and did a great job at it!\nWhat about the Spring Framework itself? It can be quite complex, but it is also extremely robust. The choice is really down to the developer- what do you include, and what do you stay away from?\nI see the Spring Boot approach to simplicity as:\nDevelopment is very easy to start with The vast complexity of autoconfiguration is handled by the framework team The inherent complexity of the framework The framework complexity can be simplified by relying only on the key parts of it When dealing with very difficult problems, this approach proved itself to be successful. Let’s look at the other frameworks.\nSimplifying things with Micronaut Micronaut is much younger than Spring Boot. At the time of writing, we are at version 1.0.0.M1, so there is plenty of scope for change.\nMicronaut describes itself as:\nA modern, JVM-based, full-stack framework for building modular, easily testable microservice applications.\nIt bears multiple similarities to Spring Boot. We have:\nDependency Injection Defaults and Autoconfiguration Multiple cloud-native capabilities built in You can clearly see the lessons from Spring Boot and Grails (the lead of Micronaut is Graeme Rocher – creator of Grails). What makes Micronaut interesting then? Once again, referring to the Micronaut documentation:\nFast startup time Reduced memory footprint Minimal use of reflection Minimal use of proxies Easy Unit Testing I would add one more myself- it is written from scratch with simplicity in mind. The question is- as the framework matures, will it become simply too similar to Spring Boot to matter, or will it manage to preserve its differences?\nThe fact that it is a new project is its greatest asset as well as the greatest risk. 
Sure, it does not depend on the arguably heavy and complex Spring, but at the same time- Spring is robust, popular and works well.\nI see the Micronaut approach to simplicity as:\nDevelopment is very easy to start with Attempting to build a simpler solution than Spring Boot while still providing defaults and autoconfiguration The framework supporting it is built from scratch for Micronaut Micronaut is new, so the future is still being decided I really like Micronaut, as it provides competition for Spring Boot. It attempts a very similar approach, but more streamlined and written with microservices in mind from the start.\nWhat do you do if you want to achieve the ultimate simplicity?\nThe Simple and Easy Javalin… If you want to make your microservices really simple, you should look at microframeworks. Or should you? Let’s look at Javalin as an example of the microframework family.\nSo, what makes Javalin so simple? It is only about 2,000 lines of source code. It is truly a microframework. With such a streamlined code-base you can achieve real simplicity. If you have any difficulties, the source code is simple enough to understand and fix.\nWhat is the price of such simplicity? Javalin does not provide as much as Spring Boot does. You don’t have projects like Javalin Data (my Spring Data introduction) or Javalin Data Flow (my Spring Cloud Data Flow intro). You don’t even have dependency injection!\nIs being so lightweight problematic? This is an interesting question. These days, with Kubernetes, Service Mesh and other microservices technologies, there is less requirement for complexity in the service itself. 
I have written about the rise of microframeworks, as I believe that we are just witnessing the beginning of this trend!\nI see the Javalin approach to simplicity as:\nMinimalistic code base Very simple interaction with the service Minimal viable set of features for a microservice “Do it yourself” approach Can you combine the simplicity and style of the Javalin approach with a more fully featured framework? Sure you can! We will finish this showcase looking at Vert.x.\nChasing simplicity in Vert.x Vert.x is the second most popular framework of the ones mentioned here (after Spring Boot). It is not targeted only at microservices (neither is Spring Boot) and it describes itself as:\nEclipse Vert.x is a tool-kit for building reactive applications on the JVM.\nThe reactive/functional approach has simplicity at its core. It offers simplicity, but not easiness. It takes some time to understand and perhaps shift our approach from the non-functional world.\nJavalin seems to me like an easy way to dip your toes into this style, and Vert.x offers a more mature enterprise alternative. Both are great and definitely onto something. Even Spring Boot is trying to make this reactive/functional model somewhat viable.\nIf you want to see what writing a simple REST service looks like in Vert.x, there is a good example available on GitHub.\nI see the Vert.x approach to simplicity as:\nFramework built completely around the reactive/functional model Providing a list of features that can compete with more traditional enterprise offerings Sacrificing ease of adoption for simplicity at its core Splitting the framework into numerous composable pieces If you are not afraid of some initial difficulty in order to work with something simple at its core, Vert.x is an interesting option!\nSummary There are many microservices frameworks and approaches available- more than I can review here. Each of them strives to make development simple and easy. 
There are trade-offs between these approaches, and different trade-offs will appeal to different audiences.\nI hope this article gave you a different way of looking at frameworks and development approaches, and perhaps motivated you to try something that is difficult, but simple!\n","permalink":"https://e4developer.com/posts/the-quest-for-simplicity-in-java-microservices/","summary":"\u003cp\u003eThere is great value in simplicity. When things are simple, they are easier to understand, easier to extend and easier to modify. They are better. Simplicity is the ultimate compliment you can give to an architecture or a framework. In this article, I look at how four different frameworks- \u003ca href=\"https://spring.io/projects/spring-boot\"\u003eSpring Boot\u003c/a\u003e, \u003ca href=\"https://javalin.io/\"\u003eJavalin\u003c/a\u003e, \u003ca href=\"https://vertx.io/\"\u003eVert.x\u003c/a\u003e and \u003ca href=\"http://micronaut.io/\"\u003eMicronaut\u003c/a\u003e- approach this quest for simplicity.\u003c/p\u003e\n\u003ch2 id=\"simple-does-not-mean-easy\"\u003eSimple does not mean easy\u003c/h2\u003e\n\u003cp\u003eOne of my inspirations for this article was a great presentation by Rich Hickey titled \u003ca href=\"https://www.infoq.com/presentations/Simple-Made-Easy\"\u003eSimple Made Easy\u003c/a\u003e.\u003c/p\u003e","title":"The Quest for Simplicity in Java Microservices"},{"content":"Are you really building microservices? What are microservices? There seems to be a constant disagreement on what constitutes microservices systems and what simply makes a “distributed monolith”. 
In this article, I will go back to basics and look at what’s at the core of what microservices really are.\nBefore giving my own opinion on the microservices definition and helping you answer the question “are we really building microservices?”, let’s quickly review what the greats have to say on that topic.\nMartin Fowler and James Lewis on Microservices Despite Martin Fowler’s fame, he is not the only one who coined these definitions. James Lewis is credited for a lot of this work… He even puts an interesting tagline on his Twitter page:\n(…) blame me for Microservices\nJames Lewis\nJokes aside, let’s see what these pioneers have to say on the definition of microservices.\nThere is the seminal article titled Microservices where Martin Fowler and James Lewis dissect the term. I strongly recommend you read it if you have not before. Rather than giving a simple definition, the authors mention the following list of characteristics that microservices exhibit:\nMicroservices Architecture Characteristics\nThe primary means of componentization in a microservices system is via services Services are organized around business capabilities They are used for building products more often than delivering projects. That indicates some DevOps ideas- teams that build microservices usually run them as well. Preference for smart endpoints and dumb pipes Decentralized governance – a higher degree of technological freedom Decentralized data management – microservices owning their own data Automated infrastructure – another sign of a close marriage between DevOps and microservices Designed for failure – scalability from resiliency Exhibit evolutionary characteristics I am just reiterating what was said in the enlightening Microservices article (really, you should read it). The point here is that these characteristics do not form a simple and rigid definition.\nWhat we can gather from here is that Microservices Architecture is a style. 
In most cases, architects and developers decide to pursue the microservices approach and then end up somewhere on the spectrum.\nWhen going for Microservices Architecture you may achieve different levels of maturity. I have even attempted to create a Microservices Maturity Quiz that tries to highlight some useful technologies.\nSam Newman and “Principles of Microservices” Another person who greatly contributed to the popularity of microservices is Sam Newman. His book “Building Microservices” (I reviewed it here) is one of the most popular on the topic.\nRather than focus on the book, I want to bring up an absolutely phenomenal talk about Microservices delivered by Sam, titled “Principles of Microservices”. Thanks to the wonders of the Internet and the generosity of NDC Conferences, you can watch the talk in its entirety here… And believe me, it is worth it!\nWhat do we learn about Microservices from Sam here? We actually have a working Microservices definition:\nSmall Autonomous services that work together, modeled around a business domain\nMicroservices Definition – Sam Newman\nThis is great! A concise definition that encompasses the key things about Microservices.\nIf this is what Microservices are, why is this talk over 50 minutes long? That’s because there is the definition, and then there are principles. Sam identifies these principles as:\nMicroservices Principles\nBeing Modelled Around Business Domain Culture of Automation Hiding Implementation Details Decentralizing all the things Independent Deployment Consumer First Isolating Failure Being Highly Observable Do you notice anything? The Microservices Principles here are very similar to the characteristics of microservices architecture as described by Martin Fowler and James Lewis!\nSo are you using microservices? The answer to this question is simple. 
*Are you attempting to build small and autonomous services that are modelled around business domains?* If you answer yes, then you are working with microservices.\nYou would think that’s great news… Yes, it is great in some ways, but it also means that you are attempting something really difficult. Make sure that you review the Principles of Microservices as presented by Sam Newman and avoid Common Microservices Tech Debt.\nPeople will always argue about what makes a Microservices Architecture. I would advise everyone to focus on what makes a good Microservices Architecture. You can start with the principles, but since they were only formulated in 2015, there is still a lot of room for change in the future.\nConclusion I hope this article gives you a clear idea of how you can define microservices:\nSmall Autonomous services that work together, modelled around a business domain\nMicroservices Definition – Sam Newman\nMore importantly, I hope that after reading this, you will agree with me that understanding what makes a good microservices architecture is more important!\n","permalink":"https://e4developer.com/posts/microservices-definition/","summary":"\u003cp\u003eAre you really building microservices? What are microservices? There seems to be a constant disagreement on what constitutes microservices systems and what simply makes a “distributed monolith”. In this article, I will go back to basics and look at what’s at the core of what microservices really are.\u003c/p\u003e\n\u003cp\u003eBefore giving my own opinion on the microservices definition and helping you answer the question \u003cem\u003e“are we really building microservices?”\u003c/em\u003e, let’s quickly review what \u003cem\u003ethe greats\u003c/em\u003e have to say on that topic.\u003c/p\u003e","title":"Microservices Definition"},{"content":"Technical Architect is a job that many people understand differently. 
Some people argue Technical Architects should only design systems, staying hands-off, while others see them as deeply involved in the development of systems. Here I will explore the role of Technical Architects, the job itself and the value they bring.\nThe Role of a Technical Architect All systems have technical architectures. Sometimes this architecture is decided and designed, other times- it happens by chance.\nOn trivial projects, it may be perfectly ok for the architecture to just “happen”. If it takes two weeks to deliver the whole project, maybe it is ok to just roughly agree what needs building and that’s that!\nThe idea of using just enough architecture is clearly explained and reasoned about in “Design It! From Programmer to Software Architect” – a book by Michael Keeling. I very much agree with this sentiment.\nTo keep this article on topic, from now on I will discuss a situation where the project is of considerable complexity and the architecture is not an obvious choice.\nDiscovering technical architecture As already discussed, all systems have some technical architecture. The problem is that this architecture is often unknown to the team! One of the tasks of architects is discovering this architecture.\nBy discovering, I mean analysing and documenting it in diagrams, designs or other artifacts that would enable a meaningful discussion about design.\nThis discovery part can include uncovering assumptions and dependencies that not everyone may be aware of. Before you can decide how to move forward in the future, you should know where you are at present!\nDesigning Software Systems I agree with Robert C. Martin (Uncle Bob) that designing and architecting are the same thing. Yes, the word architecture is often used when talking about the larger scale, but software has a fractal-like nature. The closer we look, the more layers of abstractions we see.\nHow do you decide which technical decisions are worthy of an architect’s attention? 
These are the decisions that have lasting effects. Things that are difficult to change in the future.\nOne of the proposed ways to evaluate good architecture is the ease of maintaining the system. The fewer developers you need to support the system and develop new features, the better the architecture you have. This is, of course, subject to the system being fit for purpose.\nWe are not going to discuss system design in more detail here, as numerous books were written on the topic and it may take a lifetime to master!\nDevelopers acting as architects This section sits under The Role of a Technical Architect because, often, people who practice technical architecture have different job titles. This is perfectly ok! The best teams have multiple people who can perform technical architecture. It is absolutely crucial for developers to be aware of architecture and have some design skills.\nWhile it is great to have multiple developers designing the system, there is often a need for a dedicated technical architect…\nThe Job of a Technical Architect When working on complex, enterprise-level systems, you often come across dedicated technical architects. The importance of the role and the value it brings is so high that more and more you can see technical architects working on small to medium size systems as well. Why is a dedicated role useful, when developers can create designs as well?\nGuardians of the architecture Technical Architects are responsible for the technical architecture. It sounds trivial, but it is a key part of the job. There is a benefit in having a dedicated person make sure that the architecture being created is a good one.\nThe architecture will be created regardless of whether anyone is paying attention. Technical architects should make key decisions, influence when necessary and stop any major architectural debt from being created.\nI will quote Robert C. 
Martin here:\n“The only way to go fast, is to go well.”\nWhere technology meets business Interfacing with business stakeholders and understanding the big picture is crucial to successful software delivery. In the ideal world, the whole team would be involved, everyone would understand the big picture and the business team would always be easy to interact with.\nThe reality of delivering complex software projects is often very different. The business challenges may be far too intricate for every single developer to understand deeply, and business stakeholders may be spread across the globe. In that case, you need someone to connect the dots and represent the technical team.\nWhen the Technical Architect role starts to be more focused on solving business challenges, it is often called Solution Architect. I don’t think there is a general agreement on the exact distinction between the two, but it is worth being aware of.\nA particular set of skills When technical architecture is a large part of the project, not only does it take more effort to discover and design, it also gets more difficult. While many developers make good architects, there is value in experience.\nTechnical Architects who design and document systems constantly often produce high-quality diagrams and documentation, and make good design decisions faster. A good technical architect is an amazing asset to the project who can make everyone’s life easier.\nThe Value of a Technical Architect Technical Architects are often well paid. A quick look at salary data shows that architects on average get paid 30% more than developers in London. Is that justified? One argument is that architects are often more experienced, hence the increased compensation.\nI would like to make a value argument here as well…\nDelivering value that scales with the system The value that most software developers deliver is based on the software they write. Fair enough. 
Often, this is an incredibly high value and good software developers are paid very well.\nThe value that Lead Developers / Technical Leads bring usually scales with the team. A good team with a good Lead Developer can become even better. That would make a good argument for Lead Developers to be worth a bit more than competent developers who don’t lead teams.\nThe value of a Technical Architect scales with the system. The bigger and more expensive the system, the more value can be derived from good architecture.\nImagine a system that costs £5,000,000 to deliver (not that uncommon in the enterprise). Let’s modestly argue that a good architecture could make the delivery 20% quicker… The system suddenly costs £1,000,000 less! I know I am grossly oversimplifying, but at the same time, it is not far off the reality.\nGood architecture can make or break software projects. Software projects can often make or break entire companies!\nTo make sure I am understood well- good technical architecture is also a result of a good development team and many other factors. Still, Technical Architects have the ultimate responsibility for one of the most value-impacting factors of the project. Technical Architects are positioned to create or save immense value.\nSummary Technical Architects can be immensely valuable. Technical architecture is not created only by architects; architects, however, bear the responsibility for it.\nIf you would like to learn more about technical architecture I wholeheartedly recommend:\n“Clean Architecture” – by Robert C. Martin “Design It! From Programmer to Software Architect” – by Michael Keeling ","permalink":"https://e4developer.com/posts/technical-architects-the-role-the-job-and-the-value/","summary":"\u003cp\u003eTechnical Architect is a job that many people understand differently. 
Some people argue Technical Architects should only design systems, staying hands-off, while others see them as deeply involved in the development of systems. Here I will explore the role of Technical Architects, the job itself and the value they bring.\u003c/p\u003e\n\u003ch2 id=\"the-role-of-a-technical-architect\"\u003eThe Role of a Technical Architect\u003c/h2\u003e\n\u003cp\u003eAll systems have technical architectures. Sometimes this architecture is decided and designed, other times- it happens by chance.\u003c/p\u003e","title":"Technical Architects - the role, the job and the value"},{"content":"It is hard to believe (mostly for myself) that I have already written 60 articles on this blog.\nWith hundreds of people reading this blog daily, I would like to provide more varied and valuable content. I started by adding a Start Here page to help visitors navigate. See what other ideas I have on my mind and help me choose.\nAdding Start Here is just the beginning of adding more varied content to the blog. I am looking for other ideas for making this an interesting place to learn about microservices and have a good time doing so.\nI am thinking along the following lines:\nTop Articles as you can currently see in the widgets (or under the article, when browsing on mobile). Guides and Tutorials that would form a logical continuation. I have written a few blog posts (for example on Spring Cloud Data Flow) that form a connected logical whole. More organised and focused Book Reviewing Section – I really enjoy reading and listening to books, a dedicated top page could be useful. YouTube has a wealth of High-Quality Technical Talks. It would be good to aggregate, categorize and highlight the best content. Example/Reference implementations of patterns and ideas. You can think of it as a Microservices Examples Catalogue. 
Different Quizzes for either testing knowledge or comparing yourself with other developers Any other thing that may pique the interest of fellow microservices and JVM enthusiasts What content interests you? I would be very grateful if you could let me know in the comments or on Twitter!\n","permalink":"https://e4developer.com/posts/e4developer-is-growing-start-here-and-more-to-come/","summary":"\u003cp\u003eIt is hard to believe (mostly for myself) that I have already written 60 articles on this blog.\u003c/p\u003e\n\u003cp\u003eWith hundreds of people reading this blog daily, I would like to provide more varied and valuable content. I started by adding a \u003ca href=\"https://www.e4developer.com/start-here/\"\u003eStart Here\u003c/a\u003e page to help visitors navigate. See what other ideas I have on my mind and help me choose.\u003c/p\u003e\n\u003cp\u003eAdding \u003ca href=\"https://www.e4developer.com/start-here/\"\u003eStart Here\u003c/a\u003e is just the beginning of adding more varied content to the blog. I am looking for other ideas for making this an interesting place to learn about microservices and have a good time doing so.\u003c/p\u003e","title":"E4developer is growing - “Start Here” and more to come"},{"content":"Three years ago I was looking for a new job. I decided that I would pick up a couple of books to help me revise for the interviews. One of those books was *“Cracking the Coding Interview”* by Gayle Laakmann McDowell. I expected a book that would help me revise for the interviews, but I got a lot more from it!\nProgramming interview questions and answers The main idea behind this book is to give you a list of programming questions that may come up during the interview and teach you how to solve them.\nThe problems and solutions in the book are grouped intelligently. 
We have three broad categories of questions: Data Structures, Concepts and Algorithms and Knowledge Based Questions.\nGoing through these numerous topics, not only did I end up much better prepared for the interview, but I also greatly improved my general knowledge of a large number of concepts and ideas. Isn’t that the point? Get better at interviewing by understanding the material much better? No tricks here, this is just knowledge!\nTo really do justice to the impressive coverage of different topics, here is the chapter listing for the first two categories that I mentioned:\nData Structures:\nArrays and Strings (including HashTables etc.) Linked Lists Stacks and Queues Trees and Graphs Concepts and Algorithms:\nBit Manipulation Math and Logic Puzzles Object-Oriented Design Recursion and Dynamic Programming System Design and Scalability Sorting and Searching Testing Each of these chapters contains numerous sub-chapters. There is enough knowledge in this book to prepare you thoroughly for anything that is likely to come up in a technical interview.\nFor the interviewer If you are conducting technical interviews, this book is also a great help. It discusses good interviewing technique and gives you a large number of questions that you can use in your interviews.\nIt is reassuring to know that your recruiting process is similar to that of Google, Microsoft or Apple. The author of the book – Gayle Laakmann McDowell – interviewed extensively at Google and worked at these other companies. She knows what she is talking about!\nBeyond the technical questions This book would be worth its price just for the good coverage of the general software development topics. 
I really liked that it did not stop there and gave general advice on topics such as:\nPreparing for the interview – writing a good resume, making a plan Big O notation in more depth than I have seen tackled before General strategy on working with technical questions Dealing with behavioral questions Dealing with the offer and negotiating The book has about 80 pages dedicated to these topics, so the treatment goes into some depth. Preparing for an interview, this part of the book made for exciting reading!\nAdvanced topics Last time I was looking for a job was 3 years ago. Despite that, I still occasionally use the book! I enjoy practicing competitive programming with HackerRank. Some of the more difficult questions require advanced algorithmic knowledge… Surprise, surprise- in the Additional Review Problems section, “Cracking the Coding Interview” has you covered!\nThe book does not cover everything, and if you really need an in-depth algorithms handbook, there is always Introduction to Algorithms by MIT Press… The difference is that the Introduction is over 1200 pages long and not an easy read.\n*“Cracking the Coding Interview”* still covers pretty advanced topics that are by no means trivial. I have learned about Rabin-Karp Substring Search and Red-Black Trees among other things from this book! It definitely improved my competitive programming and my scores on HackerRank (you can see them or follow me on HackerRank here).\nSummary Preparing for a technical interview is hard work. There is no single book that you can simply buy to pass an interview. What you need to do is get the book and actually work through the examples, study! There are no cheap tricks here.\nThe goal of a technical interview book is to help you study. 
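To give a taste of the kind of material I mean, here is my own minimal sketch of the Rabin-Karp substring search mentioned above (illustrative code of mine, not the book’s):

```java
// A minimal sketch of Rabin-Karp substring search: compare rolling hashes
// of a sliding window against the pattern's hash, and only fall back to a
// character-by-character check when the hashes collide.
public class RabinKarp {
    static final long BASE = 256, MOD = 1_000_000_007L;

    // Returns the index of the first occurrence of pattern in text, or -1.
    public static int search(String text, String pattern) {
        int n = text.length(), m = pattern.length();
        if (m == 0) return 0;
        if (m > n) return -1;
        long patHash = 0, winHash = 0, pow = 1;
        for (int i = 0; i < m - 1; i++) pow = pow * BASE % MOD; // BASE^(m-1) mod MOD
        for (int i = 0; i < m; i++) {
            patHash = (patHash * BASE + pattern.charAt(i)) % MOD;
            winHash = (winHash * BASE + text.charAt(i)) % MOD;
        }
        for (int i = 0; ; i++) {
            // Verify characters only when the rolling hashes match.
            if (patHash == winHash && text.regionMatches(i, pattern, 0, m)) return i;
            if (i + m >= n) return -1;
            // Slide the window: drop text[i], append text[i + m].
            long removed = text.charAt(i) * pow % MOD;
            winHash = ((winHash - removed + MOD) * BASE + text.charAt(i + m)) % MOD;
        }
    }
}
```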
*“Cracking the Coding Interview”* helped me study and helped me get a job, I hope it will help you too!\n","permalink":"https://e4developer.com/posts/cracking-the-coding-interview-learn-that-and-much-more/","summary":"\u003cp\u003eThree years ago I was looking for a new job. I decided that I would pick up a couple of books to help me revise for the interviews. One of those books was “Cracking the Coding Interview” by Gayle Laakmann McDowell. I expected a book that would help me revise for the interviews, but I got a lot more from it!\u003c/p\u003e\n\u003ch2 id=\"programming-interview-questions-and-answers\"\u003eProgramming interview questions and answers\u003c/h2\u003e\n\u003cp\u003eThe main idea behind this book is to give you a list of programming questions that may come up during the interview and teach you how to solve them.\u003c/p\u003e","title":"“Cracking the Coding Interview” - learn that and much more!"},{"content":"I have recently been very interested in microframeworks. One thing notably missing from that article is Spring in the context of a microframework. You may be surprised, but it is possible to write very lightweight APIs with Functional Spring. In this article, I will show you how, by turning the CIA World Factbook into a REST API.\nSo what is Functional Spring? The Functional Web Framework was introduced in Spring 5 and lets you build a very lightweight REST API without much of the Spring magic. Sounds perfect!\nSetting up the Spring Functional Web Framework To get started with the Spring Functional Web Framework you can go with our favorite Spring Boot and a WebFlux dependency.\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-webflux\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; You don’t really need to use WebFlux, as you are not required to follow the Reactive style. 
On the other hand, since we are building a functional microframework it seems like a good idea.\nThere are multiple servers that you can choose from here, including:\nNetty Undertow Tomcat Jetty With examples provided in the official documentation. In this article, we will be using Netty.\nCIA World Factbook The CIA World Factbook is a source of free public domain information about multiple countries around the world. From the official site:\nThe World Factbook provides information on the history, people, government, economy, energy, geography, communications, transportation, military, and transnational issues for 267 world entities. Our Reference tab includes: maps of the major world regions, as well as Flags of the World, a Physical Map of the World, a Political Map of the World, a World Oceans map, and a Standard Time Zones of the World map.\nThere is a lot of fascinating information in there! Another great thing is that this information is available in JSON format in this public domain GitHub project. Data here is stored in JSON files in multiple directories.\nI want to build a small REST API for retrieving this data.\nBuilding a simple Netty based functional server The service that we are building here will not make use of any Spring Boot autowiring magic. Instead, we will rely on explicitly declaring an HttpServer and HttpHandlers based on RouterFunctions.\npublic static void main(String[] args) throws InterruptedException, FileNotFoundException { HttpHandler httpHandler = RouterFunctions.toHttpHandler(createRouterFunction()); HttpServer .create(\u0026#34;localhost\u0026#34;, 8080) .newHandler(new ReactorHttpHandlerAdapter(httpHandler)) .block(); Thread.currentThread().join(); } This code is relatively easy to understand, although it looks quite alien to many Spring developers. The HttpServer object is more similar to what we commonly find in microframeworks. 
Just compare it to a Javalin HelloWorld.\npublic class HelloWorld { public static void main(String[] args) { Javalin app = Javalin.start(7000); app.get(\u0026#34;/\u0026#34;, ctx -\u0026gt; ctx.result(\u0026#34;Hello World\u0026#34;)); } } It is also quite a bit more verbose. With flexibility comes verbosity; it is often the trade-off that we end up making.\nI have encapsulated the creation of the HttpHandler in createRouterFunction(). To do that we will use the statically imported route.\nprivate static RouterFunction createRouterFunction() throws FileNotFoundException { RouterFunction\u0026lt;ServerResponse\u0026gt; routerFunction = route(GET(\u0026#34;/\u0026#34;), request -\u0026gt; createStringResponse(\u0026#34;Welcome to the CIA World Factbook. Check for directories with /directories\u0026#34;)); routerFunction = routeWithDirectories(routerFunction); return routerFunction; } Now when visiting localhost:8080 we will see “Welcome to the CIA World Factbook. Check for directories with /directories” printed out. I have hidden some complexity behind the helper function createStringResponse.\nprivate static Mono\u0026lt;ServerResponse\u0026gt; createStringResponse(String response){ return ServerResponse.ok().body(Mono.just(response), String.class); } Turning the CIA World Factbook into a REST API With an understanding of how this simple Spring microframework works, we can use it to define the remaining endpoints of our API.\nI would like to have an option of listing all directories, listing the country files in each directory and then requesting specific files.\nI am not claiming that this is the simplest way to build such an API. This use case makes for an interesting, non-trivial example of dynamic routing that may give you an idea of how to use it in more sophisticated examples. 
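One idiom in the code below is worth calling out in advance: a Scanner with the \A delimiter reads an entire file as a single token, because \A matches the beginning of the input and therefore never splits it. Here is a standalone sketch of that idiom (plain JDK, using a hypothetical temp file standing in for one of the Factbook JSON files):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Scanner;

public class ReadWholeFile {
    public static void main(String[] args) throws IOException {
        // Hypothetical stand-in for one of the Factbook JSON files.
        Path tmp = Files.createTempFile("factbook", ".json");
        Files.write(tmp, "{\"country\":\"pl\"}".getBytes(StandardCharsets.UTF_8));

        // "\\A" matches the beginning of the input, so next() returns
        // the whole file as one token.
        try (Scanner scanner = new Scanner(tmp.toFile(), "UTF-8")) {
            String fileText = scanner.useDelimiter("\\A").next();
            System.out.println(fileText); // prints {"country":"pl"}
        }
        Files.deleteIfExists(tmp);
    }
}
```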
The code below also contains some Spring-specific treatment of Resource files that you may find interesting.\nprivate static RouterFunction\u0026lt;ServerResponse\u0026gt; routeWithDirectories(RouterFunction\u0026lt;ServerResponse\u0026gt; routerFunction) throws FileNotFoundException { File file = ResourceUtils.getFile(\u0026#34;classpath:factbook\u0026#34;); String[] directories = file.list(); routerFunction = routerFunction.andRoute(GET(\u0026#34;/directories\u0026#34;), request -\u0026gt; createStringResponse(String.join(\u0026#34;,\u0026#34;, directories))); for(String directory : directories){ File countriesDir = ResourceUtils.getFile(\u0026#34;classpath:factbook/\u0026#34;+directory); String[] countries = countriesDir.list(); routerFunction = routerFunction.andRoute(GET(\u0026#34;/\u0026#34;+directory), request -\u0026gt; createStringResponse(String.join(\u0026#34;,\u0026#34;, countries))); for(String country : countries){ File countryFile = ResourceUtils.getFile(\u0026#34;classpath:factbook/\u0026#34;+directory+\u0026#34;/\u0026#34;+country); Scanner scanner = new Scanner(countryFile, \u0026#34;UTF-8\u0026#34; ); String fileText = scanner.useDelimiter(\u0026#34;\\\\A\u0026#34;).next(); scanner.close(); routerFunction = routerFunction.andRoute(GET(\u0026#34;/\u0026#34;+directory+\u0026#34;/\u0026#34;+country), request -\u0026gt; createStringResponse(fileText)); } } return routerFunction; } This was a lot of hard work, so let’s look at our newly created endpoints!\nWhen visiting localhost:8080:\nWhen visiting localhost:8080/directories:\nWhen visiting localhost:8080/europe:\nWhen visiting localhost:8080/europe/pl.json:\nSummary The Spring Functional Web Framework is an interesting tool that can be used creatively for building simple and efficient services. The CIA Factbook service that I have presented here takes less than 0.5 seconds to start on my machine. 
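The sub-second startup is easy to check yourself by wrapping the bootstrap in a rough timer. A minimal sketch (the comment stands in for the HttpServer setup shown earlier; not part of the original article):

```java
public class StartupTimer {
    public static void main(String[] args) {
        long start = System.nanoTime();

        // ... bootstrap the HttpServer here, as shown earlier ...

        // Integer division by 1_000_000 converts nanoseconds to milliseconds.
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("Started in " + elapsedMs + " ms");
    }
}
```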
This is a lightness and swiftness that we are not used to with Spring.\nThe API is quite low-level and it requires some basic understanding of how the server works. On the other hand, it enables us to get closer to the actual server, perhaps squeezing more performance out of it and making use of more server-specific options.\nThe CIA World Factbook is a fascinating resource that can be used for many creative purposes. I would like to see how a similar API could be expanded with GraphQL to make a genuinely useful service.\nThis blog post was partially inspired by http://blog.alexnesterov.com/post/spring-your-next-microframework/ and https://spring.io/blog/2016/09/22/new-in-spring-5-functional-web-framework when looking at what Spring has to offer in the microframeworks space.\nThe code for this article is available on GitHub. Let me know in the comments if you have an interesting idea for a use of either the Spring Functional Web Framework or the CIA World Factbook.\n","permalink":"https://e4developer.com/posts/cia-world-factbook-api-with-functional-spring/","summary":"\u003cp\u003eI have recently been very interested in \u003ca href=\"https://e4developer.com/posts/the-rise-of-java-microframeworks/\"\u003emicroframeworks\u003c/a\u003e. One thing notably missing from that article is Spring in the context of a microframework. You may be surprised, but it is possible to write very lightweight APIs with Functional Spring. In this article, I will show you how, by turning the \u003ca href=\"https://www.cia.gov/library/publications/the-world-factbook/\"\u003eCIA World Factbook\u003c/a\u003e into a REST API.\u003c/p\u003e\n\u003cp\u003eSo what is Functional Spring? The \u003ca href=\"https://spring.io/blog/2016/09/22/new-in-spring-5-functional-web-framework\"\u003eFunctional Web Framework\u003c/a\u003e was introduced in Spring 5 and lets you build a very lightweight REST API without much of the \u003cem\u003eSpring Magic\u003c/em\u003e. 
Sounds perfect!\u003c/p\u003e","title":"CIA World Factbook API with Functional Spring"},{"content":"I started studying Computer Science at university more than 10 years ago. Now, with Bachelor’s and Master’s degrees and a few years in the industry, I look back reflecting on the value of these degrees.\nAre Computer Science degrees worth the effort? Before going into the details of what I wish I knew before going into the world of professional software development, I want to address the most commonly asked question whenever the topic of degrees comes up:\nA Computer Science degree – Bachelor’s or Master’s, from a good university – is definitely worth the effort.\nThis is not to say that you can’t make it in this industry without a degree! You definitely can, but that is not the question.\nI see the major value in degrees coming from these main points:\nYou will have many more doors open for yourself from the start. Some companies will not talk to you if you don’t have a degree unless you have some exceptional experience. You learn the broad fundamentals of computer science (and often mathematics) that will serve you well. You get exposed to academic rigor when having your work judged; the value of that comes with any degree (not necessarily a Computer Science one). Finishing your Master’s (or Bachelor’s) may be one of the first large projects you get to finish. It can teach you much more than just computer science if you stay open-minded. Many more that may differ per university and per individual. With that thought out of the way, let’s look at what most university courses don’t teach you.\nSoftware development practices One thing that is not taught nearly enough at universities is how software is actually made. I assume here, and throughout the article, that one of the goals of a Computer Science degree is to teach you how to make software.\nI don’t think you should necessarily learn the details of Scrum, but on the other hand why not? 
You learn Java or some other language of choice, why not learn a development framework? These things are way more important than you may assume after getting a degree (if it is anything like mine).\nMaybe this is something that is genuinely better taught with practice, and many universities do have “year in industry” kinds of programs that cover this well. Even taking this into account, it feels that your standard university workshop and set of exercises will make you think that software development is not so heavily collaborative.\nMoving on from Scrum and Agile in general, it would be good if things such as Code Review or Source Code Control (perhaps with Git) were explored more.\nAre these topics too concerned with working the actual job of a software developer to be part of a Computer Science degree? I think there is space for such topics. Let me know your opinion in the comments.\nThe importance of frameworks You spend three years using Java (or some other language of choice), acing all the classes and you think that you are ready for the real world… Then you join a company and you are hit with Enterprise Java, Spring, Grails or whatever else is popular these days!\nI certainly don’t advocate for universities to start teaching frameworks, but maybe a framework design class would be a good idea? Many of us learned how to write a compiler, but hardly anyone learned how something like Spring gets written.\nEven if you are not going to be learning about frameworks, understanding their use and importance in modern software development is important. Many new graduates have the urge to write things from scratch and are shocked looking at the dozens of frameworks being used in enterprise architectures.\nPerhaps the answer would be to teach how to learn these things? This is something that most developers naturally pick up in their careers. 
The more you know, the easier it is to learn and understand how these things fit together.\nWhen it comes to the massive importance of backend and frontend frameworks and how universities should approach this subject, I have more questions than answers! Perhaps that’s why it is not addressed very well at the moment.\nSoftware development career Another area where people may disagree with me. Some may say that universities should not be teaching about careers… Why then do they provide industry placements, CV classes, and career workshops? It seems that people running computer science departments all around the world understand the importance of that!\nI think universities are already doing what they should, but many students underestimate the importance of these things.\nSome of the things worth knowing before entering the world of professional software development:\nDifferences between small and large companies Differences between startups and established organizations Backend or Frontend development? Should you specialize in one or both? How to negotiate a salary. How to be a good team member and how to provide feedback well. Should you learn many programming languages? Which ones? Does it matter? What are the other roles that you can do when it comes to software? Testing, Business Analysis, Design, Management? How do you get there? Many many more. Overall, there are tons of interesting questions that I did not even consider when attending university.\nIf you are fresh out of university or have never considered these questions before, think about them and do some research. It may help you a lot!\nThe goals behind Computer Science degrees I don’t want people to misunderstand what I am saying here. I am not criticizing universities for not teaching everything possible about Computer Science and the job of Software Developers. 
I want to highlight things that you may be missing if you don’t have much experience beyond the degree itself.\nI think the important goals of Computer Science degrees these days should include:\nTeaching fundamental computer science Teaching how to program in some languages Getting students started at writing some software Getting students ready to pursue a further academic career if desired Getting students ready to pursue a software-related career if desired Most good universities do well on all these points. A Bachelor’s or Master’s degree only gets you started on your career; there is more to learn.\nConclusion Overall, I really enjoyed the time I spent at the two different universities I got my degrees from. I think it was valuable and well-spent time, where I learned a few languages, a lot about core Computer Science, Machine Learning, OOP, Functional Programming… and yet got surprised by the realities of writing software in the real world!\nClarification: I hold a Bachelor’s degree in Mathematics with Computer Science and a Master’s degree in Computer Science.\n","permalink":"https://e4developer.com/posts/computer-science-degree-the-missing-pieces/","summary":"\u003cp\u003eI started studying Computer Science at university more than 10 years ago. Now, with Bachelor’s and Master’s degrees and a few years in the industry, I look back reflecting on the value of these degrees.\u003c/p\u003e\n\u003ch2 id=\"are-computer-science-degrees-worth-the-effort\"\u003eAre Computer Science degrees worth the effort?\u003c/h2\u003e\n\u003cp\u003eBefore going into the details of what I wish I knew before going into the world of professional software development, I want to address the most commonly asked question whenever the topic of degrees comes up:\u003c/p\u003e","title":"Computer Science Degree - The Missing Pieces"},{"content":"Should I learn Java? This is a question that just keeps coming up. 
Whether you are just starting out as a developer, already work as a Front End Developer, or even come from a .NET background, you may wonder if learning Java is the right career/personal development move. Let’s see how useful learning Java is in 2018.\nWhen I first started using Java, around 2007, it had a very mixed reputation. On one hand, it was a reasonably new and modern language, but on the other, it was infamous for its bad performance (not fully deserved in my opinion) and verbosity (when contrasted with, back then, very popular Python).\nNow, more than 10 years later, the question becomes interesting for multiple reasons. Let me list the key concerns that I hear most often:\nJava is old and is going out of fashion. There are much better JVM languages like Scala, Clojure, and Kotlin. I am a Frontend Developer, isn’t NodeJS more practical? Java is unpleasant to work with. Java is too slow/consumes too much memory. Why should I learn Java over X, Y, Z instead? I am sure there may be more questions and concerns out there, so let me know in the comments. I may edit the article or answer you directly.\nLet’s look at these concerns and questions one by one!\nConcern 1: Java is old and is going out of fashion Java was released in 1995 (according to its Wikipedia page), so it may already be older than some of its users. Is that old? This is subjective; it is older than many languages, that’s for sure! Is that a problem? Well, that’s ageism! Surely age alone is not an argument, so let’s look at the other part of this statement.\nIs Java really going out of fashion? The TIOBE Index tracks the popularity of programming languages. Here is the current top 20 as of 2018:\nNot only is Java the number one most popular language according to TIOBE, but it is actually gaining in popularity! 
Sure, there are other languages gaining popularity faster and moving up the list, but saying that Java is going out of fashion is just untrue.\nConcern 2: There are much better JVM languages like Scala, Clojure, and Kotlin This is an interesting point, especially with Kotlin rapidly gaining popularity. If you are new to the JVM, should you even bother with Java or should you go straight to (let’s say) Kotlin?\nI would argue that knowing Java is essential if you want to be a career developer on the JVM. Of course, you can learn any language in isolation, but you may be missing some context. Plenty of these languages rely on Java libraries and you will most likely not avoid at least reading Java.\nI actually consider it a major benefit of knowing Java: it gives you a foundation. The JVM is such a rich platform, with languages like Groovy, Scala, Clojure, and Kotlin, nearly all of them having some inspiration or relationship with Java (beyond the JVM).\nI would encourage everyone to explore other languages on the JVM; this is often where the innovation in Java is coming from. I would not hold it as a reason to avoid learning Java though! Learning Java will give you a head start in any of these languages and it is really a worthy investment!\nConcern 3: I am a Frontend Developer, isn’t NodeJS more practical? This can be generalized to any Frontend Developer wondering if learning a server-side language like Java would be of use.\nNodeJS is extremely practical and popular. You can build services quickly and effectively. However, Java is more established on the server side and can be really easy to work with as well.\nThis question can really only be answered by looking at your personal situation. Would you prefer staying mainly a Frontend Developer forever, or would you ever want to go for a deeper dive on the server side? I would argue that it may be beneficial to at least learn how to read Java.\nThere is a lot of Java server-side code written out there already. 
Even if you are not planning on writing more yourself, you will limit yourself by not being able to understand the language.\nThis concern has some merit: if you are already working with NodeJS, using JavaScript (or TypeScript) on both the client and the server, you would need a good reason to start using Java. Is it a worthy investment for the future? That is for you to answer.\nConcern 4: Java is unpleasant to work with Java Enterprise Edition became quite infamous for its use of XML for bean configuration… That stained Java’s reputation as a nasty language to write code in for years to come. This is no longer true.\nI have written about The Rise of Java Microframeworks recently. These days writing a Java service can be incredibly trivial. Let’s look at “Hello World” written in Spark Java:\nimport static spark.Spark.*; public class HelloWorld { public static void main(String[] args) { get(\u0026#34;/hello\u0026#34;, (req, res) -\u0026gt; \u0026#34;Hello World\u0026#34;); } } Is that really unpleasant? Quite the opposite, I would say! Java is fun! With Spring Boot it even somehow became fun in the enterprise!\nAnother thing that Java enjoys is an incredible amount of high-quality tools, support and online material that makes solving most problems very simple.\nConcern 5: Java is too slow/consumes too much memory Java runs on the JVM, so it used to be plagued with slower startup times. You will not win against a C program that does something comparable to a bash utility when you need to start the JVM first. You may struggle to win on speed against super small and super light native applications. Is that a reason not to use Java? For those specific cases, probably yes.\nWhat can you use Java for then? Is it actually fast these days?\nJava is used heavily in the Big Data space, for example with tools such as Apache Hadoop actually written in Java. The largest banks and financial enterprises in the world run Java to power their backends. 
Java is actually used in High-Frequency Trading applications, where it can rival C++ in performance in some cases. Java is used on Android devices heavily. Java is big in the embedded space. Many more. If you want to write video games, Java also may not be the best choice for you. I think in reality this has more to do with JVM availability than with the “performance” worries that people have.\nWhy should I learn Java over X, Y, Z instead? Java is an amazing language. Being the most popular language in the world at the moment, it is one of the core skills for software development.\nYou don’t have to learn Java instead of another language. For most people, being a programmer (hobby or professional) is something that lasts more than a few months. Don’t limit yourself to learning only Java. Not learning Java will cut you off from its massive and dynamic community.\nJava is also evolving faster than ever, with the release cycle changed to two major releases a year. This is exciting. It already brought us great things such as the use of var for type inference from Java 10 onwards. There is more to come.\nShould I Learn Java? Yes, you should learn Java. It is the most popular language in the world today for a reason. It is reasonably simple, modern, fast and it is evolving. There is an abundance of libraries helping you write amazing code and easy access to help and materials online.\nIf you were on the fence, I hope that you are not anymore: go learn Java!\n","permalink":"https://e4developer.com/posts/should-i-learn-java-in-2018/","summary":"\u003cp\u003eShould I learn Java? This is a question that just keeps coming up. Whether you are just starting out as a developer, already work as a Front End Developer, or even come from a .NET background, you may wonder if learning Java is the right career/personal development move. 
Let’s see how useful learning Java is in 2018.\u003c/p\u003e\n\u003cp\u003eWhen I first started using Java, around 2007, it had a very mixed reputation. On one hand, it was a reasonably new and modern language, but on the other, it was infamous for its bad performance (not fully deserved in my opinion) and verbosity (when contrasted with, back then, very popular Python).\u003c/p\u003e","title":"Should I Learn Java in 2018"},{"content":"Thank you for reading my newsletter. In the last month, I wrote on Java, Microservices and creating amazing teams, as expected! I also wrote my first funny post – How to write horrible Java – that was warmly received. Check it out!\nI continue my fascination with the Java Microframeworks ecosystem in The Rise of Java Microframeworks. If you have not heard about them before, definitely give it a read.\nI have once again included a few articles that I found interesting, but don’t come from the E4developer website.\nReflecting on the current popularity and direction of Spring Framework:\nThe State of Spring in 2018 – Devoxx UK Impressions\nAdvice on how to write good Java code:\nEffective Java Microservices require Effective Java\nMuch more popular advice on how to write horrible Java code:\nHow to write horrible Java\nThe psychology and biology of creating amazing teams:\nSecrets to Highly Motivated and Happy Software Teams\nMiniseries on Spring and Kafka for developers:\nHow to easily run Kafka with Docker for Development\nGetting Started with Kafka in Spring Boot\nMicroservices adoption advice:\nAdopting Microservices – Pragmatic Advice\nAnother, practical, look at Spring and Reactive Programming:\nWebFlux in practice – asynchronous service with WebClient\nAn incredibly popular tour of the current Java Microframeworks ecosystem:\nThe rise of Java Microframeworks\nBeyond E4developer\nIntroduction to Micronaut – by Adrian Marszalek\nCreating REST Microservices with Javalin – by Christopher Franklin\nSpring Boot Profiles: A Strategic Way to Configure 
Applications – by Greg Rice\n","permalink":"https://e4developer.com/posts/e4developer-newsletter-june-2018-number-4/","summary":"\u003cp\u003eThank you for reading my newsletter. In the last month, I wrote on Java, Microservices and creating amazing teams, as expected! I also wrote my first funny post – \u003ca href=\"https://e4developer.com/posts/how-to-write-horrible-java/\"\u003eHow to write horrible Java\u003c/a\u003e – that was warmly received. Check it out!\u003c/p\u003e\n\u003cp\u003eI continue my fascination with the Java Microframeworks ecosystem in \u003ca href=\"https://e4developer.com/posts/the-rise-of-java-microframeworks/\"\u003eThe Rise of Java Microframeworks\u003c/a\u003e. If you have not heard about them before, definitely give it a read.\u003c/p\u003e\n\u003cp\u003eI have once again included a few articles that I found interesting, but don’t come from the E4developer website.\u003c/p\u003e","title":"E4developer Newsletter – June 2018 – Number 4"},{"content":"Who wouldn’t want to work as a part of a happy and motivated software team? Wouldn’t it be great if every team you ever worked with was like that? Based on Leaders Eat Last by Simon Sinek and Drive by Daniel Pink, I will tell you how to make this possible.\nWhat motivates developers? There are different ideas out there about how to motivate people. The two most common ones are bonuses (or other financial rewards) and fun work (challenging, interesting etc.). While there is some truth to both, they do not give the full picture.\nThe book that really opened my eyes to the topic of motivation is Drive by Daniel Pink. The book tackles the topic of financial motivation and identifies the three key components of motivation: autonomy, mastery, and purpose.\nMotivating teams with money Everyone likes money and we all wouldn’t mind being paid more. Can we use bonuses to motivate teams? Only short term!\n“Drive” presents numerous studies and argues persuasively that bonuses do not make people more motivated. 
They very quickly become expectations. If you paid a bonus one year, not paying it the next year will really demotivate people. Paying the same level of bonuses? That will be taken for granted. What can you do then? Pay a good base salary and don’t worry about bonuses (at least that’s what “Drive” and numerous researchers say).\nWhat can you do then? What can you give your team, if it’s not bonuses, that will keep them motivated? Well, I’m glad you asked!\nTeam’s Autonomy It comes as no surprise that the highest-performing teams are often self-managed teams!\nAs individuals, we crave autonomy. We want to have direction, but we want to be free to choose the course we take in achieving something.\nThat individual yearning for autonomy also translates to teams. Give the team direction, feedback, support, but do not take away their autonomy!\nTeam’s Mastery The second key piece of the puzzle of motivation is mastery. What is meant by that? An urge to improve, to get better at something that matters.\nBeyond individual improvements, the team needs to have a way to improve the overall process. This is also why measuring progress and receiving feedback is so important.\nAnother way to look at it is linked with autonomy. Perhaps giving the team and its members some time for improvement could help? Ideas such as 20% of time spent on technical debt and 10% spent on team members’ own initiatives come to mind.\nTeam’s purpose The last concept that is key to the team’s motivation: purpose. The work that is being done needs to have some purpose.\nWhat can this purpose be? Of course you can think of a noble cause, helping those in need etc… In reality, most teams simply do not work on these kinds of initiatives. You can have a much more mundane scenario and still fill the team with a sense of purpose.\nThere is a lot of insight and motivation from simply seeing users interact with the software that the team is building; yet, development teams are often kept away from the end users. 
If this is not possible, then sharing metrics and feedback and providing context can help.\nYou want to show the team that the work they are doing is making a difference. You want them to know that what they work on has a purpose.\nWhat about happiness? Motivation is important and it is difficult to find a happy team that is not motivated. I see it as a sort of prerequisite. In order to achieve happiness, the team has to work in harmony with human nature…\nOk, but how do we reason about this human nature? “Leaders Eat Last” answers this by looking at the different chemicals that govern human happiness. These are Dopamine, Serotonin, Endorphins, and Oxytocin… Don’t worry, this won’t turn into a biology lesson!\nDopamine – hitting your (and the team’s) goals Dopamine is all about achieving goals. This is what makes us happy when we see yet another user sign up for the service, or a few extra story points completed in the sprint.\nDopamine is often thought of as a “selfish chemical”, as these are our own goals that cause it to be released. The trick here is to align our own goals with the team goals.\nThere is nothing wrong with a bit of healthy competition and a feeling of achievement in the team.\nOxytocin – building a community Trust, friendship, community. These are the things that are linked to Oxytocin. We feel good when we help others, we feel good when others help us… we even feel good just witnessing an act of kindness!\nOxytocin is the biological reason why teams have superpowers and why you can’t create a real team overnight from strangers. It takes time for these bonds of community and friendship to develop.\nThe existence of a biological factor shows that team lunches, going to pubs and spending time together have a serious, non-superficial impact on a team’s performance.\nIt turns out that when we help each other, the whole team benefits!\nSerotonin – each team member is important This one is harder to explain with a catchy phrase. 
Serotonin is released when we feel pride, when we feel that our work is valued and important. It is also related to having good leaders.\nIt is important to build community and help each other, but it is equally important to recognize individual contributions. We should treat all team members with the respect that they deserve.\nGetting the whole team’s achievements recognized (especially when this is done publicly) is a sure way to increase everyone’s levels of serotonin (and happiness).\nEndorphins – no pain no gain The last of the happy chemicals is a chemical group called endorphins. These are released as a result of overcoming pain and stress. I certainly do not recommend stressing and hurting everyone as a way of making them happy!\nThe existence of endorphins shows that we have an inbuilt system for dealing with things such as physical exhaustion (no surprise). Thanks to endorphins we feel ecstatic at the end of a challenge.\nHave you ever stayed late with your friends working on a problem, emerging successful and celebrating at the end? Things like that can be sources of happiness as well, as long as they are not the norm!\nHealthy balance To sum it up, a team’s happiness can be achieved by staying true to our human nature:\nWorking on shared goals (and hitting them) Creating a community where we help each other Recognizing everyone’s importance and having good leaders Overcoming challenges Simple, isn’t it?\nSummary Creating motivated and happy teams is not easy, but it is possible. Reflecting on what motivates us and what makes us happy can guide you in doing the right things. I think if more people were aware of these things, we would have many more happy and motivated teams!\n","permalink":"https://e4developer.com/posts/secrets-to-highly-motivated-and-happy-software-teams/","summary":"\u003cp\u003eWho wouldn’t want to work as a part of a happy and motivated software team? Wouldn’t it be great if every team you ever worked with was like that? 
Based on Leaders Eat Last by Simon Sinek and Drive by Daniel Pink, I will tell you how to make this possible.\u003c/p\u003e\n\u003ch2 id=\"what-motivates-developers\"\u003eWhat motivates developers?\u003c/h2\u003e\n\u003cp\u003eThere are different ideas out there about how to motivate people. The two most common ones are bonuses (or other financial rewards) and fun work (challenging, interesting etc.). While there is some truth to both, they do not give the full picture.\u003c/p\u003e","title":"Secrets to Highly Motivated and Happy Software Teams"},{"content":"Together with the growing popularity of microservices and lightweight REST APIs, we are witnessing another trend in Java: the rise of Java Microframeworks. Javalin, Micronaut, Spark and many more make building REST APIs a breeze. In this article, I look at this exciting space and share my opinions on their use.\nWhat is a microframework? A microframework is a minimalistic web application framework. What usually distinguishes it from more traditional, large application frameworks is:\nFocus on simplicity and speed of development Usually a much smaller codebase Lack of some advanced features, such as templating engines, advanced security features etc. It is not a scientific definition and some frameworks (Vert.x for example) sit at the boundary of the two: on one hand, it is lightweight and much smaller than, let’s say, Spring, but on the other, it is pretty well-featured and non-trivial.\nIt is worth adding that Java did not invent microframeworks. One of the earlier examples is Sinatra from Ruby (2007), which inspired quite a few Java microframeworks. I am sure some of the readers will be familiar with even earlier examples; if you are, let me know in the comments!\nWhy are microframeworks growing in popularity? First of all, microframeworks are not yet mainstream. That may soon change, especially with the rapid growth of interest in Serverless Architectures. 
Serverless really benefits from small and lightweight deployments- if you want to use Java in that context, microframeworks seem like a good choice.\nAnother big driver for their popularity is the increasing adoption of containers (Docker), container management systems (Kubernetes) and patterns such as API Gateway. Suddenly, the services do not need to deal with as many problems as they used to.\nAll that would not matter much if the microframeworks themselves were not easy to work with. The new projects are amazing. I am a huge proponent of Spring Boot in the enterprise, but I can’t deny the elegance of Javalin. It is unbelievable what today’s microframework creators can accomplish in just a few thousand lines of code!\nTour of microframeworks Enough talking, let’s look at some of my favorite projects and see how easy they are to work with.\nJavalin A simple web framework for Java and Kotlin\nThis was my first encounter with a “modern” Java microframework. Javalin is written in Kotlin and has support for both Java and Kotlin. If you want to write a nice REST API, it is a pleasure to do so with Javalin.\nJavalin is being actively developed, with new versions released every few weeks.\nJavalin Hello World:\nimport io.javalin.Javalin; public class HelloWorld { public static void main(String[] args) { Javalin app = Javalin.start(7000); app.get(\u0026#34;/\u0026#34;, ctx -\u0026gt; ctx.result(\u0026#34;Hello World\u0026#34;)); } } Javalin Official Website\nSpark Java Spark – A micro framework for creating web applications in Kotlin and Java 8 with minimal effort\nOne of the earlier Java takes on microframeworks, dating back to 2011. Spark is very small, focused and probably the most commonly used of the frameworks mentioned here.\nSpark is being actively developed, with bug fix and maintenance releases every few months. 
New features are added less frequently.\nSpark Hello World:\nimport static spark.Spark.*; public class HelloWorld { public static void main(String[] args) { get(\u0026#34;/hello\u0026#34;, (req, res) -\u0026gt; \u0026#34;Hello World\u0026#34;); } } Spark Official Website\nMicronaut A modern, JVM-based, full-stack framework for building modular, easily testable microservice applications.\nWith Micronaut, we are getting quite close to the boundary between what is considered a microframework and what is not. The framework is very simple, but it packs a bit more than most of the competition. I think of it as an extremely slimmed-down version of Spring Boot.\nWhat is great about Micronaut is their focus on the cloud. Working on AWS and making it easy to write serverless applications is high on their priority list.\nThe first milestone of the 1.0.0 version was only released on May 30th, 2018, so we are in the very early days here. I think Micronaut has a serious chance of being the next big thing, so keep an eye on this one!\nMicronaut Hello World:\nimport io.micronaut.runtime.Micronaut; public class Application { public static void main(String[] args) { Micronaut.run(Application.class); } } import io.micronaut.http.annotation.Controller; import io.micronaut.http.annotation.Get; @Controller(\u0026#34;/hello\u0026#34;) public class HelloController { @Get(\u0026#34;/\u0026#34;) public String index() { return \u0026#34;Hello World\u0026#34;; } } Micronaut Official Website\nKtor Easy to use, fun and asynchronous.\nNot quite a Java microframework, but rather a Kotlin one. Ktor is sponsored and developed by JetBrains- the creators of Kotlin- and strives to make development easy and fun. 
I have not had a chance to test it yet, but based on its popularity among Kotlin enthusiasts and the JetBrains support, it is worth mentioning here.\nKtor has not yet released a 1.0.0 version, but it should arrive sometime this year.\nKtor Hello World:\nimport io.ktor.application.* import io.ktor.http.* import io.ktor.response.* import io.ktor.routing.* import io.ktor.server.engine.* import io.ktor.server.netty.* fun main(args: Array\u0026lt;String\u0026gt;) { val server = embeddedServer(Netty, port = 8080) { routing { get(\u0026#34;/\u0026#34;) { call.respondText(\u0026#34;Hello World!\u0026#34;, ContentType.Text.Plain) } get(\u0026#34;/demo\u0026#34;) { call.respondText(\u0026#34;HELLO WORLD!\u0026#34;) } } } server.start(wait = true) } Ktor Official Website\nOther notable microframeworks It is very difficult to give an overview of every Java microframework out there. Here is a list of the ones that I did not explore further, but that can still be investigated and considered:\nRatpack – Ratpack is a set of Java libraries for building scalable HTTP applications. It is a lean and powerful foundation, not an all-encompassing framework. Jooby – Scalable, fast and modular micro web framework for Java. Akka HTTP – The Akka HTTP modules implement a full server- and client-side HTTP stack on top of akka-actor and akka-stream. It’s not a web-framework but rather a more general toolkit for providing and consuming HTTP-based services. Dropwizard – Dropwizard is a Java framework for developing ops-friendly, high-performance, RESTful web services. Jodd – Jodd is a set of micro-frameworks and developer-friendly tools and utilities. Simple code. Small size. Good performances. Whatever. Use what you like. Armeria – Armeria is an open-source asynchronous HTTP/2 RPC/REST client/server library built on top of Java 8, Netty, Thrift and gRPC. Ninja – Ninja is a full stack web framework for Java. Rock solid, fast, and super productive. 
Pippo – It’s an open source (Apache License) micro web framework in Java, with minimal dependencies and a quick learning curve. Rapidoid – Rapidoid is an extremely fast HTTP server and modern Java web framework / application container, with a strong focus on high productivity and high performance. Out of that list, I would recommend checking out Ratpack, Jooby, and Dropwizard. The first two especially follow the microframework philosophy quite closely.\nI need more than a microframework! If you need something light but fully featured, I can recommend two main options:\nSpring Boot – Spring Boot makes it easy to create stand-alone, production-grade Spring based Applications that you can “just run”.\nVert.x – Eclipse Vert.x is a tool-kit for building reactive applications on the JVM.\nSpring Boot is definitely not micro with all the dependencies that it brings, but the development experience can be quite similar if you are careful with what you choose to use.\nSummary Working with microframeworks is fun and productive. Sometimes it is too easy to always choose Spring Boot and forget that there is a whole world of Java and Kotlin innovation happening out there. I am particularly excited about Micronaut and Javalin and the way they may influence future JVM development. The ultimate cloud support and ultimate simplicity really appeal to me.\nIf I missed any of your favorite frameworks (or did not do them justice in my comments), be sure to let me know in the comments section!\n","permalink":"https://e4developer.com/posts/the-rise-of-java-microframeworks/","summary":"\u003cp\u003eTogether with the growing popularity of microservices and light-weight REST APIs, we are witnessing another trend in Java: the rise of Java Microframeworks. Javalin, Micronaut, Spark and many more make building REST APIs a breeze. 
In this article, I look at this exciting space and share my opinions on their use.\u003c/p\u003e\n\u003ch2 id=\"what-is-a-microframework\"\u003eWhat is a microframework?\u003c/h2\u003e\n\u003cp\u003eA microframework is a minimalistic web application framework. What usually distinguishes it from a more traditional, large application framework is:\u003c/p\u003e","title":"The rise of Java Microframeworks"},{"content":"Building reactive microservices with WebFlux is fun and easy. In this article, I will show you how to build a reactive “synonyms” service. Making asynchronous API calls with WebClient is likely the most common scenario for a real-life reactive microservice.\nSynonyms service – the idea I want to build a service that will return a synonym for a given word. Based on that, I would like this service to translate a sentence into another one made completely out of synonyms. For example, I will have: Java is a good language become Coffee is a right speech. It is somewhat entertaining and a nice example!\nThis particular scenario is well suited for the reactive approach, as I will end up making an API call for every single word in that sentence. I will use https://www.datamuse.com/api/, which is a free, word-based API.\nWith multiple API calls that do not block each other, I hope to achieve better efficiency and cleaner code by using WebFlux.\nDependencies I will be using WebFlux and the 2.0.2.RELEASE of Spring Boot. In order to get WebFlux, you just need to add the following dependency to the pom:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-webflux\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; Make sure not to include spring-boot-starter-web, as this will clash with WebFlux. If this is all completely new to you, I recommend first reading Getting Reactive with Spring Boot 2.0 and Reactor.\nThese are all the dependencies that you need. 
Pretty simple, right?\nBuilding the Controller The Controller will be pretty standard. I will have two endpoints- one dedicated to single-word synonyms only, and another for sentences.\n@RestController @RequestMapping(\u0026#34;/synonyms\u0026#34;) public class SynonymsController { @Autowired private SynonymsService synonymsService; @PostMapping(path = \u0026#34;/word\u0026#34;) public Mono\u0026lt;String\u0026gt; wordSynonym(@RequestBody String word) { return synonymsService.getSynonym(word); } @PostMapping(path = \u0026#34;/sentence\u0026#34;) public Mono\u0026lt;String\u0026gt; sentenceSynonym(@RequestBody String sentence) { return synonymsService.getSynonymSentence(sentence); } } You can see the autowired SynonymsService. This is where the actual logic happens.\nBuilding the Synonyms Service First, I define the WebClient to connect to the API.\nprivate WebClient client = WebClient.create(\u0026#34;https://api.datamuse.com/\u0026#34;); Based on that, I can build a method that retrieves a synonym for a single word.\npublic Mono\u0026lt;String\u0026gt; getSynonym(String word) { Mono\u0026lt;SynonymResult[]\u0026gt; synonymResultsMono = client.get() .uri(\u0026#34;words?rel_syn=\u0026#34;+word) .retrieve() .bodyToMono(SynonymResult[].class); return synonymResultsMono .map(synonymResultList -\u0026gt; getBestSynonym(synonymResultList, word)); } I have created a SynonymResult class to make processing results simpler. 
Jackson does the conversion automatically here.\npublic class SynonymResult { private String word; private int score; public String getWord() { return word; } public int getScore() { return score; } } Because there are multiple candidate synonyms returned by the API, I will only choose those that are single words, do not contain the original word, and have a high score.\npublic String getBestSynonym(SynonymResult[] synonymResultList, String word){ int topScore = 0; String topWord = word; for(SynonymResult result : synonymResultList){ if(result.getScore() \u0026gt; topScore \u0026amp;\u0026amp; !result.getWord().contains(word) \u0026amp;\u0026amp; !result.getWord().contains(\u0026#34; \u0026#34;) ){ topScore = result.getScore(); topWord = result.getWord(); } } return topWord; } The last thing to do is to connect the multiple Monos to create reactive sentence processing. This can be done with the .zipWith method.\npublic Mono\u0026lt;String\u0026gt; getSynonymSentence(String sentence) { String[] split = sentence.split(\u0026#34; \u0026#34;); Mono\u0026lt;String\u0026gt; synSentence = Mono.just(\u0026#34;\u0026#34;); for(String word : split){ synSentence = synSentence.zipWith(getSynonym(word), (w1, w2) -\u0026gt; w1 + \u0026#34; \u0026#34; +w2); } return synSentence; } The resulting service is fully reactive and asynchronous. All the API calls happen at once and the response is assembled in an orderly fashion.\nTrying out the service It is time to try out the service. Some of the sentences are translated in a nonsensical way, while others are rather entertaining! Here is a good selection:\nJava is a good language -\u0026gt; Coffee is a right speech I like to sleep -\u0026gt; One care to rest To be or not to be -\u0026gt; To work or not to work (this is disturbing) You must be the change you wish to see in the world -\u0026gt; You have work the shift you bid to look fashionable the man (what?) 
The person who reads too much and uses his brain too little will fall into lazy habits of thinking -\u0026gt; The soul who reads besides often and uses his head besides mean leave light into idle habits of thought (ok, we reached the limits here…) As you can see, the translation is not perfect, but you get the point! You can build a similar service based on https://www.datamuse.com/api/. They also offer words that rhyme, homophones (sound-alike words), popular adjectives that accompany a given word, and many more! If you want to clone my project, it is available on GitHub.\nConclusion Writing reactive services is easier than it seems. There are some semantics to be learned about using Mono and Flux, but that should not be a major obstacle to success. Now it is your turn to make your next service a reactive one.\n","permalink":"https://e4developer.com/posts/webflux-in-practice-asynchronous-service-with-webclient/","summary":"\u003cp\u003eBuilding reactive microservices with WebFlux is fun and easy. In this article, I will show you how to build a reactive “synonyms” service. Making asynchronous API calls with WebClient is likely the most common scenario for a real-life reactive microservice.\u003c/p\u003e\n\u003ch2 id=\"synonyms-service--the-idea\"\u003eSynonyms service – the idea\u003c/h2\u003e\n\u003cp\u003eI want to build a service that will return a synonym for a given word. Based on that I would like this service to translate a sentence into another one made completely out of synonyms. For example, I will have: \u003cem\u003eJava is a good language\u003c/em\u003e become \u003cem\u003eCoffee is a right speech\u003c/em\u003e. It is somewhat entertaining and a nice example!\u003c/p\u003e","title":"WebFlux in practice - asynchronous service with WebClient"},{"content":"Your company wants to adopt microservices. You are either really happy or terrified. A change like this can be great for those wanting to learn and improve their systems, but it does not come without its perils. 
If you want to be successful, you will have to be pragmatic…\nUndoubtedly you have heard about microservices by now. I have seen many people skeptical about adopting the pattern and its potential value. The good news is- you can be pragmatic about the adoption, taking what works best and at a pace that suits you.\nWhy are microservices even possible? First of all- microservices are only becoming popular now because only now have they become practical. In order to get microservices to work well, you need a few things:\nMicroservice-friendly frameworks – like Spring Boot or Javalin (and more) Easy deployment – possibly with Docker or other containers Easy provisioning – cloud computing plays a role here Good monitoring If these had been available earlier, someone would have tried SOA as microservices. Speaking of SOA…\nService Oriented Architectures – an established idea If you have been working with enterprise software for a while, there is a good chance that you are familiar with the Service Oriented Architecture – SOA.\nSOA is an approach to your architecture that focuses on delivering specialized, decoupled services. Because of the technical limitations, we used to implement SOA on large enterprise servers, using something like an enterprise service bus. It was not ideal, but sometimes it worked really well.\nWith microservices, you can have a similar mentality. You want to focus on delivering services and not worry too much about the “micro” part. With that approach, you may build something that you are more comfortable with, using much more modern tools and approaches.\nDon’t stress about the size This is the most important advice I can give you. Some people obsess over the “micro” part of microservices. Making them too small is often worse than making them slightly larger than necessary.\nIf you want to be “scientific” about the way you think about the responsibilities of your microservices, think of Bounded Contexts. 
And if in doubt- making a microservice slightly larger than necessary is not the end of the world.\nYou want to be pragmatic here. If you make them too small, you may be overwhelmed by the intense communication that happens between your services.\nMake it easy for yourself If you adopt microservices with the idea of getting everything right from the start, you are setting yourself an incredibly hard task. I recommend you start with what’s easy. Here are some ideas:\nMove away from Spring to Spring Boot If you are not a Spring user, try a microframework like Javalin or Micronaut If you are considering Choreography- do not go all in from the start. I gave a talk on that topic. Focus on your DevOps capability- it can be seen as a prerequisite for success with a full-scale microservices approach. I write more about it in this blog post for Scott Logic. If you don’t want to go all in with DevOps- focus on a solid Continuous Integration pipeline, as this is what will give you the most value. For cloud integration- adopt some technologies from the Spring Cloud suite Try microservices by separating the easily separable part of your system, or when adding a new, separate, service You get the idea! Choose what works for you Beyond making it easy, it is important to recognize that microservices are not a box-ticking exercise. Due to the complexity of the task, what works for you may differ from what the established “best practices” say.\nIt is important to be aware of the “best practices” so that whenever you choose to do something slightly different it is by choice, not by accident.\nI would be especially careful when trying to adopt CQRS/Event Sourcing or other potentially difficult patterns. Don’t feel pressured to adopt Kafka if you have ActiveMQ that works well for you. Reactive microservices with WebFlux may be overkill, etc. I hope you get the idea.\nThere is no need to remove everything that you have currently and replace it with the flashiest and most hyped solutions. 
If you have alternatives that work well for you- consider keeping them.\nAvoid common pitfalls Despite the best intentions, you may fall into one of the common traps of microservices. This is not an easy approach, so you should familiarise yourself with Common Technical Debt in Microservices.\nFor details, you can read the original article, but it summarises as:\nMicroservices Configuration – Done Badly The existence of a God Library Poorly implemented security Highly coupled services Deployment being separated from the developers Poorly implemented APIs for Orchestration Avoiding Choreography at all cost There are also potential problems with specific technologies that you may choose. Make sure you do your homework and choose your tech smartly. After all, microservices is the architecture of choices.\nSummary Microservices may be difficult, but you are not obliged to do everything at once. With this new architecture style came a wealth of technologies and techniques that can help any service oriented architecture. Even if you are not going with microservices, familiarise yourself with this approach, as the amount of innovation and useful tools is staggering!\n","permalink":"https://e4developer.com/posts/adopting-microservices-pragmatic-advice/","summary":"\u003cp\u003eYour company wants to adopt microservices. You are either really happy or terrified. A change like this can be great for those wanting to learn and improve their systems, but it does not come without its perils. If you want to be successful, you will have to be pragmatic…\u003c/p\u003e\n\u003cp\u003eUndoubtedly you have heard about microservices by now. I have seen many people skeptical about adopting the pattern and its potential value. The good news is- you can be pragmatic about the adoption, taking what works best and at a pace that suits you.\u003c/p\u003e","title":"Adopting Microservices - Pragmatic Advice"},{"content":"Kafka seems to only be gaining in popularity. 
A few years ago you could mostly see it in a Big Data engineering context. These days, Kafka is starting to power more common message-oriented architectures. In this article, I want to give you a basic introduction to working with Spring Boot and Kafka.\nInstalling Kafka on your machine One common barrier to entry for people to hack around on their machines with Kafka is how tricky the installation can be. For that very reason, I have written “How to easily run Kafka with Docker for development”– I think you will find it especially useful if you are on Windows!\nIf you are running macOS or Linux, you can still follow the aforementioned tutorial, but you could also run Kafka and Zookeeper without Docker. To do that, I recommend the Confluent.io platform and their Quick Start tutorial.\nEnabling Kafka in Spring Boot Assuming that you have Kafka accessible on kafka:9092, what follows is a basic instruction on integrating your Spring Boot application with Kafka.\nWith Spring Boot, to use Kafka, you need a single dependency added to your POM file (or equivalent if using Gradle):\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.kafka\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-kafka\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; You will also need to set a couple of properties in your application.properties file:\nspring.kafka.consumer.group-id=kafka-intro spring.kafka.bootstrap-servers=kafka:9092 You can customize how to interact with Kafka much further, but this is a topic for another blog post. If you need more in-depth information, check the official reference documentation.\nConnecting Spring Boot with Kafka In order to send messages, you will need to @Autowire KafkaTemplate. 
Once you have access to an instance of that KafkaTemplate, publishing messages to a topic becomes trivial!\n@Autowired private KafkaTemplate\u0026lt;String, String\u0026gt; kafkaTemplate; public void send(String topic, String payload) { kafkaTemplate.send(topic, payload); System.out.println(\u0026#34;Message: \u0026#34;+payload+\u0026#34; sent to topic: \u0026#34;+topic); } Listening to messages is equally easy. You will need to create a @KafkaListener method and choose a topic that you want to listen to:\n@KafkaListener(topics = \u0026#34;topic1\u0026#34;) public void receiveTopic1(ConsumerRecord\u0026lt;?, ?\u0026gt; consumerRecord) { System.out.println(\u0026#34;Receiver on topic1: \u0026#34;+consumerRecord.toString()); } We are doing only very basic operations here, but the simplicity is undeniable. Once you have your local Kafka configured, and you can publish and listen to messages- you can develop it further as necessary.\nPutting it all together Putting the publisher and a few listeners together, I have created an example Spring Boot application that is available as a GitHub project.\nYou can clone the project and, if you have Kafka running on your machine, you can try it yourself.\nSpring Cloud Stream and Kafka In the previous section, we looked at the direct integration between Spring Boot and Kafka. If you wish to abstract your messaging layer from the application logic, you could use the Spring Cloud Stream approach.\nThis Spring Cloud Stream and Kafka integration is described very well in the Kafka Streams and Spring Cloud Stream article recently published on the spring.io blog.\nAnother way that Kafka comes into play with Spring Cloud Stream is with Spring Cloud Data Flow. This abstracts the use of Kafka nearly entirely and can be interesting if you want to build an ETL pipeline or some batch processing. I wrote an introduction to Spring Cloud Data Flow and looked at different use cases for this technology.\nSummary Kafka can be an intimidating technology. 
However, with Docker and Spring Boot the barrier to entry is lower than you might have suspected. Just remember- this article should get you started; to really master Kafka, you need to learn much more!\n","permalink":"https://e4developer.com/posts/getting-started-with-kafka-in-spring-boot/","summary":"\u003cp\u003eKafka seems to only be gaining in popularity. A few years ago you could mostly see it in a Big Data engineering context. These days, Kafka is starting to power more common message-oriented architectures. In this article, I want to give you a basic introduction to working with Spring Boot and Kafka.\u003c/p\u003e\n\u003ch2 id=\"installing-kafka-on-your-machine\"\u003eInstalling Kafka on your machine\u003c/h2\u003e\n\u003cp\u003eOne common barrier to entry for people to \u003cem\u003ehack around\u003c/em\u003e on their machines with Kafka is how tricky the installation can be. For that very reason, I have written \u003ca href=\"https://e4developer.com/posts/how-to-easily-run-kafka-with-docker-for-development/\"\u003e\u003cem\u003e“How to easily run Kafka with Docker for development”\u003c/em\u003e\u003c/a\u003e– I think you will find it especially useful if you are on Windows!\u003c/p\u003e","title":"Getting Started with Kafka in Spring Boot"},{"content":"Kafka is becoming a popular addition to microservice oriented architectures. Despite its popularity, it may be tricky to run it on your development machine- especially if you run Windows. In this short article, I will show you a simple way to run Kafka locally with Docker.\nIn order to run Kafka, you need a Zookeeper instance and a Kafka instance. You also need these two instances to be able to talk to each other.\nSetting up kafkanet Docker provides us with the concept of a Docker network. We can create a dedicated network on which the containers will be able to talk to each other:\ndocker network create kafka\nWith the network kafka created, we can create the containers. 
I will use the images provided by confluent.io, as they are up to date and well documented.\nConfiguring the Zookeeper container First, we create a Zookeeper container, using port 2181 and our kafka network. I use a fixed version rather than latest, to guarantee that the example will work for you. If you want to use a different version of the image, feel free to experiment:\ndocker run --net=kafka -d --name=zookeeper -e ZOOKEEPER_CLIENT_PORT=2181 confluentinc/cp-zookeeper:4.1.0\nConfiguring the Kafka container With the Zookeeper container up and running, you can create the Kafka container. We will place it on the kafka network, expose port 9092 as this will be the port for communication, and set a few extra parameters to work correctly with Zookeeper:\ndocker run --net=kafka -d -p 9092:9092 --name=kafka -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092 -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 confluentinc/cp-kafka:4.1.0\nConnecting to Kafka – DNS editing One last catch here is that Kafka may not respond correctly when contacted on localhost:9092 – the Docker communication happens via kafka:9092. The fix is to point the kafka hostname at 127.0.0.1.\nYou can do that easily on Windows by editing the hosts file located in C:\\Windows\\System32\\drivers\\etc\\hosts. You want to add a line mapping kafka to 127.0.0.1, so your hosts file should contain a line like: 127.0.0.1 kafka\nIf you are using an OS other than Windows, you need to do an equivalent trick- pointing your kafka to 127.0.0.1.\nWith all that set up, you can connect to your Kafka locally at kafka:9092! Congratulations!\nSummary This is not a production setup- rather, a simple setup aimed at local development and experimenting. Once you understand how Kafka works, you can customize it as you please. 
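For convenience, the two docker run commands above can also be captured in a single Docker Compose file. This is only a sketch using the same confluentinc 4.1.0 images and environment variables as the commands above- the Compose file itself is my addition, not part of the original setup:

```yaml
version: "2"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:4.1.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181

  kafka:
    image: confluentinc/cp-kafka:4.1.0
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
```

Running docker-compose up -d starts both containers; Compose puts the services on a shared default network where service names resolve between containers, so the manual docker network create step is not needed.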
Hopefully, this article will save you the large amount of time I spent trying to get Dockerized Kafka to work on Windows!\n","permalink":"https://e4developer.com/posts/how-to-easily-run-kafka-with-docker-for-development/","summary":"\u003cp\u003eKafka is becoming a popular addition to microservice oriented architectures. Despite its popularity, it may be tricky to run it on your development machine- especially if you run Windows. In this short article, I will show you a simple way to run Kafka locally with Docker.\u003c/p\u003e\n\u003cp\u003eIn order to run Kafka, you need a Zookeeper instance and a Kafka instance. You also need these two instances to be able to talk to each other.\u003c/p\u003e","title":"How to easily run Kafka with Docker for Development"},{"content":"Thanks to my company – Scott Logic – I recently had the pleasure of attending the Devoxx UK 2018 conference. Among many interesting talks and speakers, there were quite a few Spring celebrities present. Juergen Hoeller (Father of Spring), Josh Long and Mark Heckler were all there. Here are my impressions from the conference.\nSpring is hugely popular This is hardly news for anyone interested in microservices, but the popularity of Spring can’t be overstated. Every Spring talk was full and the audience was very engaged. Spring is very much the bread and butter of Server Side Java in 2018.\nThe rate of adoption of Spring 5 and Spring Boot 2.0 was also surprising. Based on the quick audience polls, it seemed that more than 50% had already adopted the new versions (not necessarily in production).\nKotlin is going to be big If there is one takeaway that I got from this conference, it is the importance of Kotlin. Every single Spring talk mentioned Kotlin and how important it is for the Spring ecosystem.\nThis is good news, as personally I think Kotlin is the most Java-dev-friendly of the alternative JVM languages. 
I can imagine going with Kotlin to production with my future clients, as the Java-Kotlin interoperability is really great. Knowing that Spring supports Kotlin as a first-class citizen is reassuring.\nSpringing into Kotlin: How to Make the Magic even more Magical, with Mark Heckler If you are intrigued by this Spring-Kotlin union, you should definitely see Mark Heckler’s talk. Mark gives a good overview of how easy it is to work with Spring in Kotlin.\nThe highlight of this talk is the amount of live coding done, which really lets you see what live Spring-Kotlin development looks like. I really recommend it:\nSpring Framework 5: Feature Highlights \u0026amp; Hidden Gems, with Juergen Hoeller The second Spring talk I watched at this year’s Devoxx was by Juergen Hoeller- Spring Framework Lead and the co-author of the original framework himself!\nThis was really a highlight of what is new in Spring 5, which mostly focused on:\nComponent Style – Programmatic bean wiring, functional style etc. Reactive Architectures – WebFlux and Reactor Hidden Gems – Watch it yourself! I highly recommend it, as it is great to see the Spring 5 overview straight from the Framework Lead himself!\nCloud Native Java, part deux, with Josh Long This was the most entertaining talk of the three, thanks to the great showmanship provided by Josh Long. If you have never seen one of his Spring talks- you have to watch this one! Hilarious and full of insights.\nOnce again in Kotlin, Josh looks at WebFlux, the changing Spring Boot logo, Project Riff and multiple other serious and not-so-serious concepts and ideas. A wild ride- definitely educational, definitely entertaining!\nThe Spring Birds of a Feather The great thing is- that was not all the Spring content that was available at the conference. We were also treated to a Spring Birds of a Feather session.\nUnfortunately, there is no recording of the session. It was a not-so-formal discussion among the attendees, Juergen Hoeller and Josh Long. 
The topics that we covered included:\nPivotal participation in the Jakarta EE efforts (mostly passive at the moment) Advantages and disadvantages of WebFlux. Advice from Juergen- if you are building a simple service with not much traffic- stick to classical Spring MVC! Optimizations of the Spring Framework- different decisions that the Spring development team takes to make sure that the framework loads fast and works well. Adoption rates and challenges that different people are facing when updating to the newer versions. There were not many challenges, but I am glad that the guys care so much! Other questions and ideas. I find it amazing that the Spring team is willing to meet face to face with the users of the framework and listen to our opinions and concerns. If you ever get a chance to attend a similar Birds of a Feather session- I highly recommend it!\nSummary Devoxx UK 2018 was an outstanding conference. Normally, that information wouldn’t give you much, but this time… All the videos are available online! If you enjoy listening to high-quality technical presenters- it is your lucky day! Check all the conference videos on their YouTube channel.\n","permalink":"https://e4developer.com/posts/the-state-of-spring-in-2018-devoxx-uk-impressions/","summary":"\u003cp\u003eThanks to my company – \u003ca href=\"https://www.scottlogic.com/\"\u003eScott Logic\u003c/a\u003e – I recently had the pleasure of attending the Devoxx UK 2018 conference. Among many interesting talks and speakers, there were quite a few Spring celebrities present. \u003ca href=\"https://twitter.com/springjuergen\"\u003eJuergen Hoeller\u003c/a\u003e (Father of Spring), \u003ca href=\"https://twitter.com/starbuxman\"\u003eJosh Long\u003c/a\u003e and \u003ca href=\"https://twitter.com/MkHeck\"\u003eMark Heckler\u003c/a\u003e were all there. 
Here are my impressions from the conference.\u003c/p\u003e\n\u003ch2 id=\"spring-is-hugely-popular\"\u003eSpring is hugely popular\u003c/h2\u003e\n\u003cp\u003eThis is hardly news for anyone interested in microservices, but the popularity of Spring can’t be overstated. Every Spring talk was full and the audience was very engaged. Spring is very much the bread and butter of Server Side Java in 2018.\u003c/p\u003e","title":"The State of Spring in 2018 - Devoxx UK Impressions"},{"content":"I feel horrible today. I am sick- my throat hurts, my head is not working as it should. Hence, I decided I will tell you how to write horrible Java code. If you are tired of all these beautiful patterns and best practices and you want to write something insane- read on. Maybe you like horror stories but in code- this may appeal to you!\nIf you are looking for advice on how to write good code- look elsewhere! Check my review of “Effective Java“ and take it from there. Nothing nice is waiting for you in the following paragraphs… But if you are insisting on reading…\nStep 1 – Use Exceptions for everything You know loops right? It is so easy to make an off by one error. When you are iterating a collection, it is easy to get this wrong… Let’s see how we can use Java Exception handling to solve that issue and not worry at all about these pesky off-by-one errors!\npublic static void horribleIteration(String [] words){ int i = 0; try { while(true){ System.out.println(words[i]); i++; } } catch (IndexOutOfBoundsException e){ //iteration complete } } Step 2 – Don’t worry about access modifiers… Access modifiers in Java… What a waste of time! Did you know that making something private is just a suggestion? If you want to edit it- go for it! Nothing is really stopping you (besides maybe lack of knowledge). 
If that’s the case, check out this amazing technique.\npublic static void readPrivate() throws NoSuchFieldException, IllegalAccessException { Field f = System.class.getDeclaredField(\u0026#34;lineSeparator\u0026#34;); f.setAccessible(true); String separator = (String) f.get(System.class); System.out.println(\u0026#34;Line separator is \u0026#34; + separator + \u0026#34;.\u0026#34;); } We are reading lineSeparator here, which well… is not that exciting. Changing that lineSeparator yields much more fun! Look what happens to System.out.println after we change lineSeparator in this code:\npublic static void readWritePrivate() throws NoSuchFieldException, IllegalAccessException { Field f = System.class.getDeclaredField(\u0026#34;lineSeparator\u0026#34;); f.setAccessible(true); String separator = (String) f.get(System.class); System.out.println(\u0026#34;Line separator is \u0026#34; + separator + \u0026#34;.\u0026#34;); f.set(System.class ,\u0026#34;!!!\u0026#34;); System.out.println(\u0026#34;Line one\u0026#34;); System.out.println(\u0026#34;Line two\u0026#34;); System.out.println(\u0026#34;Line three\u0026#34;); } The output is:\nLine separator is WARNING: All illegal access operations will be denied in a future release . Line one!!!Line two!!!Line three!!! Looking good to me!\nStep 3 – Nothing is really final in Java… Some developers think that they have said their final word by dropping the final keyword in front of a variable… The truth is- sometimes you really want to change a value of a final field. 
I am not here to judge (actually- read the title, maybe I am), so here is how to do it:\npublic static void notSoFinal() throws NoSuchFieldException, IllegalAccessException, InterruptedException { ExampleClass example = new ExampleClass(10); System.out.println(\u0026#34;Final value was: \u0026#34;+ example.finalValue); Field f = example.getClass().getDeclaredField(\u0026#34;finalValue\u0026#34;); Field modifiersField = Field.class.getDeclaredField(\u0026#34;modifiers\u0026#34;); modifiersField.setAccessible(true); modifiersField.setInt(f, f.getModifiers() \u0026amp; ~Modifier.FINAL); f.setInt(example, 77); System.out.println(\u0026#34;Final value was: \u0026#34;+ example.finalValue); } public static class ExampleClass { final int finalValue; public ExampleClass(int finalValue){ this.finalValue = finalValue; } } Word of caution (ha ha ha!): this worked for me when supplying the final value in a constructor. If you have the final value set in the class, then it does not work. The code executes fine, but the value is not changed. Probably some compiler-level optimization spoiling all the fun!\nStep 4 – Use Java serialization. Just do it. This one is simple. Serialize with Java. Have fun. Enjoy it.\nOk, I guess you want some justification. Last Friday I saw Mark Reinhold – Chief Architect of the Java Platform – say that they regret putting Serialization in Java. Apparently, around 1/3 of security flaws in Java come from Serialization alone. Also, we are meant to use JSON, or databases or something like that… In my opinion, the guy doesn’t know what he is talking about!\nGo ahead, rely on Java serialization.\nStep 5 – Use Object for everything You know Classes right? Waste of time! Do you want to see a pinnacle of code reuse? 
There you go!\npublic static void printThings (List things){ int i = 0; try { while(true){ System.out.println(things.get(i)); i++; } } catch (IndexOutOfBoundsException e){ //iteration complete } } List superList = new ArrayList(); superList.add(7); superList.add(\u0026#34;word\u0026#34;); superList.add(true); superList.add(System.class); printThings(superList); Can you believe we had that power for all this time? Also, bonus point for combining two patterns!\nThis is just the beginning of what you can do with Object. Remember if in doubt- use Object. You can always cast back if needed with this amazing pattern!\npublic static void printThingsUppercaseStrings (List things){ int i = 0; try { while(true){ Object o = things.get(i); System.out.println(o); if(o.getClass() == String.class){ String so = (String) o; so = so.toUpperCase(); System.out.println(so); } i++; } } catch (IndexOutOfBoundsException e){ //iteration complete } } And this is type-safe. What a robust solution.\nStep 6 – Fully embrace the art of convenient programming Did you know that Bill Gates prefers lazy developers? Bill actually said:\n“hire a lazy person to do a difficult job (…)because a lazy person will find an easy way to do it”\nSo with that glaring endorsement of Bill Gates, we can fully embrace our laziness. Are you ready? Here we go!\nNever write tests, just don’t write bugs! Make everything public – convenient access! Favor global variables – you may need them! Prefer large interfaces to small specialized ones – the more methods you can use the better! Favor inheritance over composition (with default methods in interfaces it has never been easier)! Always use boxed primitives – they work as Objects as well (Step 5)! Use the shortest names possible for everything (a, b, val are great)! Step 7 – Don’t learn anything new – you always know best The most important quality a programmer can have is faith in himself. 
Ideally, a blind faith that he knows everything best and there is nothing more to learn!\nWith that in mind, make sure to never learn:\nNew libraries New languages New frameworks It will save you time! You should never learn anything new, as you are already the best.\nDisclaimer: I am really sick. After reading these steps, make sure to read the title again: “How to write horrible Java”. To close this article off, let’s remember this English phrase:\nJust because you can doesn’t mean you should\n","permalink":"https://e4developer.com/posts/how-to-write-horrible-java/","summary":"\u003cp\u003eI feel horrible today. I am sick- my throat hurts, my head is not working as it should. Hence, I decided I will tell you how to write horrible Java code. If you are tired of all these beautiful patterns and best practices and you want to write something insane- read on. Maybe you like horror stories but in code- this may appeal to you!\u003c/p\u003e\n\u003cp\u003eIf you are looking for advice on how to write good code- look elsewhere! Check my \u003ca href=\"https://e4developer.com/posts/effective-java-microservices-require-effective-java/\"\u003ereview of \u003cem\u003e“Effective Java\u003c/em\u003e“\u003c/a\u003e and take it from there. Nothing nice is waiting for you in the following paragraphs… But if you are insisting on reading…\u003c/p\u003e","title":"How to write horrible Java"},{"content":"Writing good software requires using the right tools. Choosing the right frameworks, libraries and designing smart systems. With all that to learn and worry about, it is easy to forget about another very important thing: using your programming language wisely. In this article, I want to introduce you to *“Effective Java”* by Joshua Bloch. Effective Java– back to “basics” With the JVM ecosystem moving faster than ever, it is easy for us to forget that writing good software is more than using frameworks correctly.\nThere are some key things we should always consider when writing code. 
These things are largely independent of the framework we use. I always like to ask these questions when performing code review:\nIs the code readable and easy to understand? Is the code maintainable? Is the code correct? Does it follow agreed best practices? If you are an experienced Java developer, you may have developed an instinct and knowledge that helps you answer these questions. But how do you get better at it?\nImagine that you could have one of the best Java developers in the world explain to you how they answer these questions. Joshua Bloch, one of the main authors of the Java Collections Framework, certainly qualifies as world-class. He also compiled a list of 90 Items worth considering when writing Java. Having him explain to you how to write effective Java is the premise of this book.\nOnce you read through these rules and understand the reasoning behind them, you will really start to feel like you are becoming a native speaker of the Java language.\nSo what about microservices, why is it related? I have spent the last two years of my professional life working with Spring Boot and Grails based microservices. Both are great technologies (I prefer Spring Boot, if you are asking) that enable you to deliver value rapidly… Neither of these technologies excuses you from writing bad code!\nIn my experience, microservices architectures are quite difficult. There are a lot of moving parts and the integration between different services can prove challenging. That only emphasizes the need to write absolutely rock-solid code in your services.\nWith the complexities of the architecture, you want the services to be simple. In order for them to be simple, you not only have to divide your domain model correctly, you also need to write clean, maintainable code.\nThe speed that we get from modern microservices frameworks should not stop us from writing quality code. Chances are that the service will be written quickly, but it may be maintained for years. 
Developers spend much more time reading code than writing new code. Let’s do everyone a favour and write Java Microservices using the native speaker version of Java.\nJava is not a new language; we know what good Java looks like. With the update of “Effective Java” to cover Java 9, you get expert advice on how to write good modern Java.\nWhat the book covers What exactly does the book cover? Given that you can look up the index of the book on Amazon, I feel that I can share it here as well. You get 12 information-packed sections:\nIntroduction – well, this one is not so information packed! Creating and Destroying Objects – basic and crucial to pretty much any Java application. Methods Common to All Objects – the ABC of dealing with Java Objects. **Classes and Interfaces** – good overview of OOP practices in Java. **Generics** – a deeper look into generics and polymorphism. **Enums and Annotations** – explanation of the often misunderstood and underused features of the language. **Lambdas and Streams** – how to deal with the new features that we got with Java 8. **Methods** – good rules for working with methods explained. **General Programming** – a mix of general programming recommendations. **Exceptions** – a guide to dealing with the ever-confusing Java Exception framework. **Concurrency** – a solid intro to Java Concurrency and best practices. **Serialization** – serializing Java Objects. As you can see, the book’s subject domain is very broad. It stands out from many others, as it manages to stay deep and insightful despite that. This is achieved by picking specific Items and examining them in-depth. Take for example:\n44. Favor the use of standard functional interfaces. – Where we get a deep look at functional interfaces in Java and best practices around their use. 
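As a quick illustration of the spirit of that Item (my own sketch, not an excerpt from the book): instead of declaring a custom single-method interface, you can usually reuse one of the standard java.util.function types, such as Predicate:

```java
import java.util.function.Predicate;

public class StandardFunctionalInterfaces {
    // Instead of declaring a custom interface like
    // `interface StringChecker { boolean check(String s); }`,
    // reuse the standard Predicate<String> type:
    static int countMatching(String[] words, Predicate<String> predicate) {
        int count = 0;
        for (String word : words) {
            if (predicate.test(word)) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        String[] words = {"alpha", "beta", "gamma"};
        // "alpha" and "gamma" have five letters
        System.out.println(countMatching(words, w -> w.length() == 5)); // prints 2
    }
}
```

Because the method accepts the standard type, callers can pass any existing `Predicate` (or method reference) without writing an adapter.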
It is one of the new and interesting additions from Java 8, and one that I don’t think is widely enough used or understood.\nYou get 89 other Items, each of which gets a few pages of in-depth explanation and discussion.\nSummary *“Effective Java”* by Joshua Bloch is one of the best books on Java that I have ever read. If you are writing Java in any context, I can’t recommend that book to you enough.\nWhile always chasing the latest and most exciting new frameworks and architectures, sometimes it is good to slow down. It is good to look back at basics and make sure that we stand on a solid foundation. *“Effective Java”* can give you that foundation.\n","permalink":"https://e4developer.com/posts/effective-java-microservices-require-effective-java/","summary":"\u003cp\u003eWriting good software requires using the right tools. Choosing the right frameworks, libraries and designing smart systems. With all that to learn and worry about, it is easy to forget about another very important thing: using your programming language wisely. In this article, I want to introduce you to *“Effective Java”*by Joshua Bloch. \u003c/p\u003e\n\u003ch2 id=\"effective-java-back-to-basics\"\u003e\u003cem\u003eEffective Java\u003c/em\u003e– back to “basics”\u003c/h2\u003e\n\u003cp\u003eWith the JVM ecosystem moving faster than ever, it is easy for us to forget that writing good software is more than using frameworks correctly.\u003c/p\u003e","title":"Effective Java Microservices require Effective Java"},{"content":"In the last month, I spent a lot of time writing about Reactive Spring Boot. I believe that Reactive Microservices are only going to grow in popularity, so I really recommend getting to know them!\nBeyond that, you can see some other interesting technologies like Javalin and Spring Data being showcased. 
These can help you with making your microservices architecture even more robust and easy to work with.\nThis time I decided to include a few other articles that I have really enjoyed reading from websites different than E4developer. I hope you will like them as well!\nEnjoy your reading!\nNew Articles Reactive Microservices with Spring Boot:\nGetting Reactive with Spring Boot 2.0 and Reactor WebFlux and servicing client requests – how does it work? Spring’s WebFlux / Reactor Parallelism and Backpressure Microservices architecture:\nMicroservices – the Architecture of Choices Practical advice:\nHow to stay up to date with Java and Tech? Use Twitter! Amazing technologies:\nJava WebSockets made simple with Javalin Quick setup for Spring Cloud Data Flow with Docker Compose Spring Data – Microservices Data Companion Also, Worth Reading Automatic Spring Boot project redeploy with DevTools – by Adrian Marszalek Guide to Spring @Autowired – by Baeldung Top 10 Things To Do With GraalVM – by Chris Seaton Comparing Apache Kafka, Amazon Kinesis, Microsoft Event Hubs and Google Pub/Sub – by Andrew Carr ","permalink":"https://e4developer.com/posts/e4developer-newsletter-may-2018-number-3/","summary":"\u003cp\u003eIn the last month, I spent a lot of time writing about Reactive Spring Boot. I believe that Reactive Microservices are only going to grow in popularity, so I really recommend you to get to know them!\u003c/p\u003e\n\u003cp\u003eBeyond that, you can see some other interesting technologies like Javalin and Spring Data being showcased. These can help you with making your microservices architecture even more robust and easy to work with.\u003c/p\u003e","title":"E4developer Newsletter - May 2018 - Number 3"},{"content":"Spring Data is one of the flagship projects of the Spring ecosystem. If you need to work with data- be it SQL, non-SQL, using map-reduce or other, Spring Data most likely has you covered. 
In this article, I will introduce the Spring Data project and explain how it makes writing microservices easier.\nWorking with data is at the core of software development. This data can be in different forms:\nRelational / SQL Database NoSQL Database Graph Databases (like Neo4j) LDAP records Distributed cache technologies (Redis) Other technologies and variations of the above How great would it be to have a single technology that you could rely on when dealing with any of the above? Well, it is your lucky day- Spring Data can help you with all these and more!\nOverview of Spring Data Spring Data is an umbrella project. It contains multiple different projects that will help you to integrate with any of the aforementioned data sources.\nThe project provides a common way of building these integrations. The official project site states:\nSpring Data’s mission is to provide a familiar and consistent, Spring-based programming model for data access while still retaining the special traits of the underlying data store.\nBefore going into details, let’s see what makes Spring Data particularly useful for microservices architectures.\nWhy is Spring Data good for microservices? Seamless integration with Spring Boot – This in itself is a killer feature. Knowing how popular Spring Boot is, having the project integrate with it well will make any developer’s life easier.\nA large number of technologies supported – Because of all the different technologies that are supported, using Spring Data brings familiarity to often complicated and not commonly known technologies. Different microservices often interact with different databases in unique ways.\nFocus on usability and brevity – One of the big principles behind microservices is being micro… We don’t want to have an overwhelming configuration to deal with each time we build a new service. 
Spring Data coupled with Spring Boot really helps here.\nI wrote an article in 2016 for Scott Logic showcasing how easy it makes working with MongoDB. In a workshop I ran, I was able to get all the integration up and running with live coding in under 45 minutes while explaining every step!\nCommon Integrations To really understand why Spring Data is so useful, here are a few examples of how easily we can integrate with Spring Boot, Data, and a few popular database technologies.\nSpring Data and MongoDB MongoDB is a very popular database. Let’s see how easy it is to have Spring Data work with it.\nWe start by adding the required dependency to the POM:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-data-mongodb\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; Now we can define the class representing data that we want to store. Let’s create a Product class:\npublic class Product { @Id public String id; public String name; public String description; public Product () {} public Product (String name, String description) { this.name = name; this.description= description; } } With Spring Data Repositories you can define additional methods for retrieving and working with objects:\npublic interface ProductRepository extends MongoRepository\u0026lt;Product, String\u0026gt; { public List\u0026lt;Product\u0026gt; findByName(String name); } This repository can later be used in the following fashion:\n@Autowired private ProductRepository repository; //Listing all products repository.findAll(); //Finding products by name repository.findByName(someName); //Saving products repository.save(product); The connection details can be easily configured as necessary in the usual Spring Boot way.\nSpring Data and MySQL MySQL, being a very popular SQL database, will serve as our example for common integration number two.\n\u0026lt;dependency\u0026gt; 
\u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-data-jpa\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;mysql\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;mysql-connector-java\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; The class changes only slightly from the MongoDB example, to accommodate the JPA-specific @Entity and @GeneratedValue annotations. Note that the id is now a Long, to match the generated value strategy and the repository definition below.\n@Entity public class Product { @Id @GeneratedValue(strategy=GenerationType.AUTO) public Long id; public String name; public String description; public Product () {} public Product (String name, String description) { this.name = name; this.description= description; } } The MySQL repository will extend the interface CrudRepository rather than MongoRepository. MongoRepository is a specialization of CrudRepository.\npublic interface ProductRepository extends CrudRepository\u0026lt;Product, Long\u0026gt; { } Using this MySQL-based repository is pretty much identical to the way we used the MongoDB one. This is one of the great benefits of Spring Data in action!\n@Autowired private ProductRepository productRepository; productRepository.findAll(); productRepository.save(someProduct); You can configure the connectivity to the MySQL database with Spring Boot properties.\nCommon Integration Summary and Further Reading MongoDB and MySQL served here as examples for the point I am trying to make. You can use SQL and NoSQL databases in a very similar fashion with Spring Data. 
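For completeness, the connection details mentioned above are typically supplied via standard Spring Boot properties in application.properties (the property names are the standard Spring Boot ones; the hosts, database names, and credentials below are illustrative placeholders):

```properties
# MongoDB (spring-boot-starter-data-mongodb)
spring.data.mongodb.uri=mongodb://localhost:27017/products

# MySQL (spring-boot-starter-data-jpa + mysql-connector-java)
spring.datasource.url=jdbc:mysql://localhost:3306/products
spring.datasource.username=dbuser
spring.datasource.password=dbpassword
spring.jpa.hibernate.ddl-auto=update
```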
The familiarity Spring Data brings really helps developers to become database agnostic.\nYou can find further details on working with MongoDB and MySQL here:\nAccessing Data with MongoDB by Spring.io Accessing Data with MySQL by Spring.io Familiar syntax with unfamiliar technologies One of the big benefits of using Spring Data is how well different and innovative technologies get integrated with the project. It makes learning new things feel familiar. Let’s have a look at Neo4j as an example.\nSpring Data and Neo4j This time we require a single dependency:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-data-neo4j\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; Defining entities is a bit different, because the concepts in Neo4j differ from those in more traditional databases.\n@NodeEntity public class Product { @Id @GeneratedValue public String id; public String name; public String description; public Product () {} public Product (String name, String description) { this.name = name; this.description= description; } @Relationship(type = \u0026#34;COMPATIBLE\u0026#34;, direction = Relationship.UNDIRECTED) public Set\u0026lt;Product\u0026gt; compatibleProducts; public void compatibleWith(Product product) { if (compatibleProducts == null) { compatibleProducts = new HashSet\u0026lt;\u0026gt;(); } compatibleProducts.add(product); } } From this point onward things get extremely similar. You define a CrudRepository the same way you would for MySQL, and you can interact with the repository through the same interface. The configuration is handled by setting Spring Boot properties.\nOverall, using Neo4j with Spring Data feels very close to using it with any other database. This is a common theme with Spring Data. 
While you can rely on the specializations that different data technologies bring, what can be done in a common way is done in a common way.\nSummary Using Spring Data brings multiple benefits to your project. You get the ease of development and a multitude of well-supported integrations. If you are already using Spring Boot, I don’t see why you would not make use of Spring Data. If you are not using Spring Boot, Spring Data gives yet another reason to start!\n","permalink":"https://e4developer.com/posts/spring-data-microservices-data-companion/","summary":"\u003cp\u003eSpring Data is one of the flagship projects of the Spring ecosystem. If you need to work with data- be it SQL, non-SQL, using map-reduce or other, Spring Data most likely has you covered. In this article, I will introduce the Spring Data project and explain how it makes writing microservices easier.\u003c/p\u003e\n\u003cp\u003eWorking with data is at the core of software development. This \u003cem\u003edata\u003c/em\u003e can be in different forms:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003eRelational / SQL Database\u003c/li\u003e\n\u003cli\u003eNoSQL Database\u003c/li\u003e\n\u003cli\u003eGraph Databases (like Neo4j)\u003c/li\u003e\n\u003cli\u003eLDAP records\u003c/li\u003e\n\u003cli\u003eDistributed cache technologies (Redis)\u003c/li\u003e\n\u003cli\u003eOther technologies and variations of the above\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eHow great would it be to have a single technology that you could rely on when dealing with any of the above? Well, it is your lucky day- \u003cstrong\u003eSpring Data can help you with all these and more\u003c/strong\u003e!\u003c/p\u003e","title":"Spring Data - Microservices Data Companion"},{"content":"Spring Cloud Data Flow requires quite a few dependencies in order to run it. 
In this blog post, I will show you the Docker Compose tool and how it can be used to make that setup easy.\nI have written an introduction to Spring Cloud Data Flow where in order to run the Data Flow server, you need to have 3 other Docker containers running.\nThis is not that bad, but imagine if you had to have more dependencies? Or if you want to have that process easily replicable? Sharing that setup with other developers on the team? You can see that it would be good to have a better way of doing this…\nIntroducing Docker Compose If you are looking for the detailed documentation of Docker Compose, you can find it here on the official site. What I want to give you here is a quick and practical introduction that will get you using the tool in no time!\nDocker Compose is perfectly summarised by the authors of the tool themselves:\nCompose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.\nWe basically have three steps to use Docker Compose:\n1. Decide which containers you want to run:\nLet’s say that we want to run the redis:latest image and MongoDB version 3.4. Imagine that this is exactly what is required to get our development environment working.\n2. Define the docker-compose.yml that describes the containers to be created:\nIn this case, docker-compose.yml would look like this:\nversion: \u0026#39;3\u0026#39; services: my-redis: image: redis:latest mongo-3.4: image: mongo:3.4 ports: - \u0026#34;27017:27017\u0026#34; 3. Run the docker-compose.yml with the command docker-compose up:\nIn order to start the containers in the background, add -d to your command. 
Now run docker-compose up -d and voila!\nSetting up Spring Cloud Data Flow with Docker Compose The idea to do that came from a company presentation done by my colleague Jan Akerman, who later published his code example as a GitHub project.\nInstead of having to set up every Docker container separately, let’s see what is needed: Redis, MySQL, and RabbitMQ. With that knowledge, we can write the docker-compose.yml:\nversion: \u0026#39;3\u0026#39; services: rabbitmq: image: rabbitmq:3-management ports: - \u0026#34;15672:15672\u0026#34; - \u0026#34;5672:5672\u0026#34; expose: - \u0026#34;15672\u0026#34; - \u0026#34;5672\u0026#34; mysql: image: mysql:5.7 environment: MYSQL_DATABASE: scdf MYSQL_USER: root MYSQL_ROOT_PASSWORD: dataflow ports: - \u0026#34;3306:3306\u0026#34; expose: - 3306 redis: image: redis:2.8 ports: - \u0026#34;6379:6379\u0026#34; expose: - \u0026#34;6379\u0026#34; Now, in the same directory where you have that docker-compose.yml, run the docker-compose up -d command. You should see something like this after listing containers with docker ps:\nWith that running, download the Local Spring Cloud Data Flow server from this link. You can start the Spring Cloud Data Flow with:\njava -jar spring-cloud-dataflow-server-local-1.3.0.RELEASE.jar --spring.datasource.url=jdbc:mysql://localhost:3306/scdf --spring.datasource.username=root --spring.datasource.password=dataflow --spring.datasource.driver-class-name=org.mariadb.jdbc.Driver --spring.rabbitmq.host=127.0.0.1 --spring.rabbitmq.port=5672 --spring.rabbitmq.username=guest --spring.rabbitmq.password=guest\nJust make sure that you are using Java 8 as this version of Spring Cloud Data Flow does not work well with newer versions!\nYou should now be able to visit http://localhost:9393/dashboard/#/apps and see Spring Cloud Data Flow running:\nSummary Docker Compose is a very useful tool when you need to spin up multiple Docker containers in order to get your project running. 
It can help you share your more complicated setups like the one for Spring Cloud Data Flow here.\nGoing forward I will start using it for my blog posts, to make it easier for others to follow the examples.\nOne thing that could be improved here is to use Spring Cloud Data Flow server as a Docker container itself, but this one is for another time!\n","permalink":"https://e4developer.com/posts/quick-setup-for-spring-cloud-data-flow-with-docker-compose/","summary":"\u003cp\u003eSpring Cloud Data Flow requires quite a few dependencies in order to run it. In this blog post, I will show you Docker Compose tool and how it can be used to make that setup easy.\u003c/p\u003e\n\u003cp\u003eI have written an \u003ca href=\"https://e4developer.com/posts/getting-started-with-spring-cloud-data-flow/\"\u003eintroduction to Spring Cloud Data Flow\u003c/a\u003e where in order to run the Data Flow server, you need to have 3 other Docker containers running.\u003c/p\u003e\n\u003cp\u003eThis is not that bad, but imagine if you had to have more dependencies? Or if you want to have that process easily replicable? Sharing that setup with other developers on the team? You can see that it would be good to have a better way of doing this…\u003c/p\u003e","title":"Quick setup for Spring Cloud Data Flow with Docker Compose"},{"content":"Spring Boot 2.0 (and Spring 5) introduced WebFlux as a way to build reactive Microservices. WebFlux is built using Reactor, which introduces completely new ideas to Spring Boot parallelism. Backpressure, Schedulers, and Parallel Flux are a few concepts that we will look at closer in order to understand how to make the most of our reactive services.\nI have recently written articles on Getting Started with WebFlux and Concurrency in Spring Boot. 
One thing that I did not explore enough in these articles was the concurrency implications of building a WebFlux-based reactive microservice.\nIf you are completely new to WebFlux, I recommend reading the previously mentioned articles. If you have the basic ideas down- let’s see how we can make the best use of concurrency in this framework!\nWhat is Backpressure? WebFlux is based on Reactor, which is a reactive-stream implementation. One of the main selling points of reactive-streams is the handling of backpressure. But what is backpressure?\nBackpressure is a way of dealing with a data stream that may be too large at times to be reliably processed. The goal is to feed the data to subscribers at the rate at which they can reliably deal with that data. The unprocessed data can be buffered (or we could choose a different strategy), hence the pressure analogy! Think of water pressure and a firefighter’s hose as in the featured picture. The firefighter only lets as much water out as she can deal with.\nLet’s get more technical. The idea behind reactive streams is to enable the pull-push hybrid approach to data streams. A subscriber can request a specific amount of data, while the source can push that data in a configured way. If the data stream is too large, the data waiting for processing is handled by a buffering strategy. The illustration of that can be seen in the picture below:\nEnough theory, let’s see how these ideas translate to Reactor and WebFlux.\nHow to limit the number of items being processed? One of the main ways of dealing with backpressure is implementing a custom BaseSubscriber that deals with requesting data as necessary. The basic idea looks like this:\npublic class BackpressureReadySubscriber\u0026lt;T\u0026gt; extends BaseSubscriber\u0026lt;T\u0026gt; { public void hookOnSubscribe(Subscription subscription) { //request the first item on subscribe request(1); } public void hookOnNext(T value) { //process value //processing... 
//once processed, request the next one //you can implement specific logic to slow down processing here request(1); } } And then, you can subscribe to a source as per usual:\nBackpressureReadySubscriber\u0026lt;String\u0026gt; bSubscriber = new BackpressureReadySubscriber\u0026lt;\u0026gt;(); Flux\u0026lt;String\u0026gt; source = stringBasedSource(); source.subscribe(bSubscriber); When manually creating a subscriber, make sure to request enough data so that your Flux does not get stuck. You want to have at least one request() being called from the hookOnNext() method.\nWhat about the parallelism? What happens when you request more data to be processed? Can you have more data processed asynchronously? Even if you attempt to get more data processed in parallel by calling request() from your custom BaseSubscriber it won’t work unless you are using a ParallelFlux.\nThe good news is- getting a ParallelFlux is simple! All you need to do is to call the parallel() method on the standard Flux as in the example below:\nFlux.range(1, 1000) .parallel(8) .runOn(Schedulers.parallel()) .subscribe(i -\u0026gt; System.out.println(i)); Calling the parallel() method may not be enough for parallelism if you don’t have enough threads to allocate your workload to. This is nicely explained in the Reactor reference documentation:\nTo obtain a ParallelFlux, you can use the parallel() operator on any Flux. By itself, this method does not parallelize the work. Rather, it divides the workload into “rails” (by default, as many rails as there are CPU cores).\nSo how can we get this divided work to actually execute in parallel? One thing that you may notice in the code above is the use of .runOn(Schedulers.parallel())…\nHow to deal with threads – introducing Schedulers Threads have always been the core tool for dealing with multi-threading on the JVM. 
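As an aside, the request(1) pattern shown above is not unique to Reactor: since Java 9 the JDK ships the same reactive-streams interfaces in java.util.concurrent.Flow. The sketch below (class and method names are my own, purely illustrative) shows a subscriber that pulls items one at a time - the same hookOnSubscribe/hookOnNext rhythm - using only the standard library:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;
import java.util.concurrent.TimeUnit;

public class OneAtATimeDemo {

    // Same idea as the Reactor BaseSubscriber above: never request
    // more than one item at a time, so the publisher cannot flood us.
    static class OneAtATimeSubscriber implements Flow.Subscriber<Integer> {
        final List<Integer> received = new CopyOnWriteArrayList<>();
        final CountDownLatch done = new CountDownLatch(1);
        private Flow.Subscription subscription;

        @Override
        public void onSubscribe(Flow.Subscription s) {
            subscription = s;
            s.request(1); // request the first item on subscribe
        }

        @Override
        public void onNext(Integer item) {
            received.add(item);      // "process" the item
            subscription.request(1); // only then ask for the next one
        }

        @Override
        public void onError(Throwable t) { done.countDown(); }

        @Override
        public void onComplete() { done.countDown(); }
    }

    public static List<Integer> run() throws InterruptedException {
        OneAtATimeSubscriber subscriber = new OneAtATimeSubscriber();
        // SubmissionPublisher is the JDK's stock Flow.Publisher.
        try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(subscriber);
            for (int i = 0; i < 5; i++) {
                publisher.submit(i); // blocks if the subscriber's buffer is full
            }
        } // close() delivers pending items, then signals onComplete
        subscriber.done.await(5, TimeUnit.SECONDS);
        return subscriber.received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());
    }
}
```

The mechanics - requesting the next item only from inside onNext - carry over directly to Reactor’s BaseSubscriber hooks.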
Even when dealing with a modern framework such as Reactor, there is no escape from the reality that the work has to happen on some specified thread.\nReactor gives you the power to choose which threads to allocate to specific tasks so that you run an optimal number of threads for your server. Schedulers are a Reactor concept that lets you specify which thread pool a task will be executed on.\nTo give you an idea of things at your disposal let’s look at the available Schedulers:\nSchedulers.immediate() – the current thread\nSchedulers.single() – a single reusable thread. This will re-use the same single thread until the Scheduler is disposed of.\nSchedulers.newSingle() – a single, dedicated thread.\nSchedulers.elastic() – creates new worker pools as needed and reuses the idle ones. Idle threads (default 60s) are disposed of.\nSchedulers.parallel() – you can create a specific number of threads for that Scheduler. It defaults to your CPU cores.\nAs you can see, you have plenty of flexibility in deciding how you will allocate your work across different Schedulers. With this flexibility comes the responsibility of understanding these concepts. You need to make sure that all developers working with the code know about your scheduling strategies to get the most out of reactive services.\nWhat about the overflow? The standard way of dealing with overflow in your backpressure is to buffer that data. Normally, you expect your processing power to eventually be fast enough to deal with whatever comes its way.\nWhat happens if this is not the case? What if you are dealing with a stream that can consistently overwhelm your consumers? If you try to buffer it, that buffer will grow forever, resulting in a foreseeable OutOfMemoryError.\nDon’t worry! Reactor has you covered. If you are dealing with one of those tricky cases, you may create your own Flux, choosing a viable overflow strategy. 
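To build some intuition for what those strategies do, here is a toy, dependency-free sketch of how DROP-style and LATEST-style overflow handling differ over a small bounded buffer. This is an illustration of the semantics only - Reactor implements these internally, and its actual LATEST operator keeps just the single newest value:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class OverflowDemo {

    // DROP-style: when the buffer is full, the incoming signal is discarded.
    public static List<Integer> dropStyle(List<Integer> signals, int capacity) {
        Deque<Integer> buffer = new ArrayDeque<>(capacity);
        for (int signal : signals) {
            if (buffer.size() < capacity) {
                buffer.addLast(signal);
            } // else: signal dropped, the oldest data wins
        }
        return new ArrayList<>(buffer);
    }

    // LATEST-style: when the buffer is full, evict the oldest buffered
    // signal so the newest data always wins.
    public static List<Integer> latestStyle(List<Integer> signals, int capacity) {
        Deque<Integer> buffer = new ArrayDeque<>(capacity);
        for (int signal : signals) {
            if (buffer.size() == capacity) {
                buffer.removeFirst();
            }
            buffer.addLast(signal);
        }
        return new ArrayList<>(buffer);
    }

    public static void main(String[] args) {
        List<Integer> burst = List.of(1, 2, 3, 4, 5);
        System.out.println(dropStyle(burst, 3));   // keeps the earliest signals
        System.out.println(latestStyle(burst, 3)); // keeps the most recent signals
    }
}
```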
Creating a Flux in that much detail deserves its own article, so I will simply refer you to the Reactor Flux reference documentation.\nWhat overflow strategies are at your disposal? Here is the list:\nIGNORE – Ignores downstream requests and pushes the data anyway. That may result in an IllegalStateException so think twice before using it!\nERROR – throws an IllegalStateException if the downstream can’t keep up.\nDROP – drops the signal if the downstream can’t receive it.\nLATEST – only allows the latest signal from upstream.\nBUFFER – the default – buffers all signals until you run out of memory.\nIf you are interested in dealing with streams potentially too large to process (think massive analytics, a Twitter stream) you may look into DROP and LATEST to still build a service that works.\nHot and Cold publishers The last topic worth looking into when exploring Reactor parallelism is the idea of Hot and Cold publishers.\nMost of the publishers that you see in these examples are cold publishers. Cold means that the data will be generated anew with each subscription. In this case- subscription generates data. As Reactor says: nothing happens until you subscribe.\nIn contrast to that, hot publishers do not depend on subscribers. They can publish data all the time, whether or not any subscriber is there. With hot publishers, subscribers will only see the data published after they subscribed. This is not true for cold publishers, where all the data is available (or created on subscription).\nIt is important to be aware of these two ideas, as they can massively impact the load that you are forecasting on your subscribers. Creating hot publishers is explained in the Hot vs Cold section of the reference documentation.\nMost of the streams you will create will by default be of the cold kind.\nSummary There is a lot of ground to cover here. 
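Before wrapping up, the hot behaviour from the previous section is easy to demonstrate without Reactor: the JDK’s SubmissionPublisher is itself a hot publisher, so a subscriber that arrives late simply misses everything published before it subscribed. A stand-alone sketch (class names are mine, not WebFlux API):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;
import java.util.concurrent.TimeUnit;

public class HotPublisherDemo {

    // Unbounded-demand subscriber that just records what it sees.
    static class CollectingSubscriber implements Flow.Subscriber<Integer> {
        final List<Integer> received = new CopyOnWriteArrayList<>();
        final CountDownLatch done = new CountDownLatch(1);

        @Override
        public void onSubscribe(Flow.Subscription s) { s.request(Long.MAX_VALUE); }
        @Override
        public void onNext(Integer item) { received.add(item); }
        @Override
        public void onError(Throwable t) { done.countDown(); }
        @Override
        public void onComplete() { done.countDown(); }
    }

    public static List<List<Integer>> run() throws InterruptedException {
        CollectingSubscriber early = new CollectingSubscriber();
        CollectingSubscriber late = new CollectingSubscriber();
        SubmissionPublisher<Integer> hot = new SubmissionPublisher<>();

        hot.subscribe(early);
        hot.submit(1);
        hot.submit(2);
        hot.subscribe(late); // items 1 and 2 are already gone for this subscriber
        hot.submit(3);
        hot.close();

        early.done.await(5, TimeUnit.SECONDS);
        late.done.await(5, TimeUnit.SECONDS);
        return List.of(early.received, late.received);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // the late subscriber only sees the last item
    }
}
```

A cold publisher, by contrast, would replay (or regenerate) the whole sequence for the late subscriber.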
To be confident that you understand how threading, parallelism and backpressure all work together in WebFlux / Reactor, make sure you understand:\nWhat backpressure is\nHow to limit the amount of data requested\nHow to deal with buffer overflow\nWhat role parallelism plays\nWhat Schedulers are\nWhat Hot and Cold publishers are\nIf you understand these concepts well, you are well on your way to mastering handling backpressure in reactive microservices!\n","permalink":"https://e4developer.com/posts/springs-webflux-reactor-parallelism-and-backpressure/","summary":"\u003cp\u003eSpring Boot 2.0 (and Spring 5) introduced WebFlux as a way to build reactive Microservices. WebFlux is built using Reactor, which introduces completely new ideas to Spring Boot parallelism. Backpressure, Schedulers, and Parallel Flux are a few concepts that we will look at closer in order to understand how to make the most of our reactive services.\u003c/p\u003e\n\u003cp\u003eI have recently written articles on \u003ca href=\"https://e4developer.com/posts/getting-reactive-with-spring-boot-2-0-and-reactor/\"\u003eGetting Started with WebFlux\u003c/a\u003e and \u003ca href=\"https://e4developer.com/posts/introduction-to-concurrency-in-spring-boot/\"\u003eConcurrency in Spring Boot\u003c/a\u003e. One thing that I did not explore enough in these articles was the concurrency implications of building a WebFlux based reactive microservice.\u003c/p\u003e","title":"Spring’s WebFlux / Reactor Parallelism and Backpressure"},{"content":"WebSockets is a technology that enables establishing an interactive connection between the front-end and the service. Being an advanced web technology, it may appear intimidating. In this blog post, I will show you how to easily build a WebSockets enabled service with Java and Javalin.\nA quick intro to WebSockets WebSockets are a relatively new (2011), but a well-supported communication protocol. 
At the time of writing every major browser supports them.\nWhat is so great about them? If you are looking to build a very interactive application (think Google Docs, chats or games) they are the protocol to choose. You get an open channel of communication, rather than having to rely on the request-response model.\nHow do you initiate a WebSocket connection in JavaScript? It is simple:\nconst ws = new WebSocket(`ws://localhost:7070/some-endpoint/session-id`); And with that connection, you can simply hook into the following events:\nws.onopen = event =\u0026gt; {} ws.onmessage = messageEvent =\u0026gt; {} ws.onerror = event =\u0026gt; {} ws.onclose = closeEvent =\u0026gt; {} I will later show you an example of a simple JavaScript frontend that can be built with these events.\nWhy Javalin… What is Javalin anyway? Javalin is an amazing micro-framework for writing microservices. I have chosen it here as it makes writing WebSockets as easy as it gets. If you want to learn more about the framework check out the official site and my Lightweight Kotlin Microservices with Javalin blog post.\nWebSockets basics with Javalin Working with Javalin and WebSockets is nearly identical to working with JavaScript and WebSockets. The API looks as follows:\napp.ws(\u0026#34;/websocket/:path\u0026#34;, ws -\u0026gt; { ws.onConnect(session -\u0026gt; {}); ws.onMessage((session, message) -\u0026gt; {}); ws.onClose((session, statusCode, reason) -\u0026gt; {}); ws.onError((session, throwable) -\u0026gt; {}); }); The WsSession object wraps Jetty’s Session object and adds convenience methods, the most useful one being session.send(“message”). The full list can be found in the official documentation: https://javalin.io/documentation#websockets\nMaking a simple service that receives and sorts a message I wanted to build something very simple but fun to see as an example. 
I decided for a service that will receive the message from the frontend and then send back progressively more sorted version of the message. You can see the gif below illustrating the idea:\nWriting this with request-response would be quite unpleasant as there would have to be quite a lot of polling involved. Imagine if we were dealing here with a similar blocking request with a chunked response. Real-time analytics streaming perhaps?\nThe service code is very simple with the most difficult part being the actual sorting:\npackage websockets; import io.javalin.Javalin; import io.javalin.embeddedserver.jetty.websocket.WsSession; import java.util.Arrays; import java.util.Map; import java.util.concurrent.ConcurrentHashMap; public class Main { private static Map\u0026lt;String, WsSession\u0026gt; sessions = new ConcurrentHashMap\u0026lt;\u0026gt;(); public static void main(String[] args) { Javalin.create() .port(7070) .enableStaticFiles(\u0026#34;/public\u0026#34;) .ws(\u0026#34;/demo/:session-id\u0026#34;, ws -\u0026gt; { ws.onConnect(session -\u0026gt; { session.send(\u0026#34;Hello Session: \u0026#34;+session.param(\u0026#34;session-id\u0026#34;)); }); ws.onMessage((session, message) -\u0026gt; { String sortedMessage = \u0026#34;\u0026#34;; while(message.length() \u0026gt; 0){ Thread.sleep(50); sortedMessage = sortedMessage + message.substring(0,1); message = message.substring(1); //sorting char[] chars = sortedMessage.toCharArray(); Arrays.sort(chars); sortedMessage = new String(chars).trim(); String response = \u0026#34;message unsorted: \u0026#34; + \u0026#34;\u0026#34;+message+\u0026#34;\\n\u0026#34;+\u0026#34;message sorted: \u0026#34;+sortedMessage; session.send(response); } }); ws.onError(((wsSession, throwable) -\u0026gt; System.out.println(\u0026#34;Something went wrong\u0026#34;) )); ws.onClose((session, status, message) -\u0026gt; { //clean-up }); }) .start(); } } You also need to add the relevant Javalin dependencies:\n\u0026lt;dependencies\u0026gt; 
\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.javalin\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;javalin\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.6.0\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.slf4j\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;slf4j-simple\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.7.25\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; On the frontend, we have to deal with the WebSocket appropriately:\n\u0026lt;!DOCTYPE html\u0026gt; \u0026lt;html lang=\u0026#34;en\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;meta charset=\u0026#34;UTF-8\u0026#34;\u0026gt; \u0026lt;title\u0026gt;Javalin WebSockets demo\u0026lt;/title\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;textarea style=\u0026#34;width: 100%; height: 100px\u0026#34; id=\u0026#34;query\u0026#34; placeholder=\u0026#34;Type something ...\u0026#34;\u0026gt;\u0026lt;/textarea\u0026gt; \u0026lt;br\u0026gt; \u0026lt;textarea style=\u0026#34;width: 100%; height: 100px\u0026#34; readonly id=\u0026#34;webanswer\u0026#34;\u0026gt;\u0026lt;/textarea\u0026gt; \u0026lt;script\u0026gt; window.onload = setupWebSocket; function setupWebSocket() { const textQuery = document.querySelector(\u0026#34;#query\u0026#34;); const textAnswer = document.querySelector(\u0026#34;#webanswer\u0026#34;); const ws = new WebSocket(`ws://localhost:7070/demo/session1`); ws.onopen = event =\u0026gt; { console.log(\u0026#39;connection established\u0026#39;); } ws.onmessage = messageEvent =\u0026gt; { textAnswer.value = messageEvent.data; } ws.onerror = event =\u0026gt; { textAnswer.value = \u0026#39;error\u0026#39;; } ws.onclose = closeEvent =\u0026gt; { console.log(\u0026#39;connection closed\u0026#39;); setupWebSocket(); } //Send message on pressing return textQuery.onkeydown = key =\u0026gt; { if(key.keyCode === 13) { 
ws.send(textQuery.value); textQuery.value = \u0026#39;\u0026#39;; } } } \u0026lt;/script\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; As you can see, you can make use of WebSockets easily and productively when you choose the right tools. I have shared this code on my GitHub account: https://github.com/bjedrzejewski/javalinwebsockets\nMore examples When writing this blog post I was heavily inspired by two great examples available on the Javalin website:\nCreating a simple chat-app with WebSockets\nCreating a Google Docs clone with WebSockets\nMake sure to check them out for more details and inspiration. Also do not forget that Javalin fully supports Kotlin.\nDo I need to use Javalin when working with WebSockets? You don’t have to use Javalin. You can use Spring (with this good article by Baeldung explaining how) or multiple other frameworks.\nI have used Javalin here as it provides a very good development experience. Once you understand how to work with WebSockets, you can use them in less trivial frameworks with confidence.\nConclusion WebSockets are an exciting technology that I think is not used enough. I believe this is partly because many developers are afraid of potential difficulties when developing with WebSockets. I hope this article gave you some more confidence to give WebSockets a try.\n","permalink":"https://e4developer.com/posts/java-websockets-made-simple-with-javalin/","summary":"\u003cp\u003eWebSockets is a technology that enables establishing an interactive connection between the front-end and the service. Being an advanced web technology, it may appear intimidating. In this blog post, I will show you how to easily build a WebSockets enabled service with Java and Javalin.\u003c/p\u003e\n\u003ch3 id=\"a-quick-intro-to-websockets\"\u003eA quick intro to WebSockets\u003c/h3\u003e\n\u003cp\u003eWebSockets are a relatively new (2011), but a well-supported communication protocol. 
At the time of writing every major browser supports them.\u003c/p\u003e","title":"Java WebSockets made simple with Javalin"},{"content":"Java, Microservices, other JVM languages… Programming, best practices, architecture… Libraries, frameworks, methodologies… I could go on for a while here. Being a passionate and up to date Software Developer is a challenge. How do you stay up to date with all this? I will let you in on a well-known secret- use Twitter! Read on to find out what Twitter can help you with and how I use it to stay up to date.\nWhy is Twitter a good tool to keep yourself up to date? There are many ways to stay current- reading books, reading blogs (hey, you are doing that now!), attending conferences and more. I am not suggesting you stop doing any of this. Instead, I am giving you one more, that works really well for me.\nWith Twitter, you get access to curated content. Curated in two ways. First- you choose who to follow, so if someone is constantly posting quality articles and information- you follow them. Second- you get to see which articles and info get the most likes and retweets. That usually correlates with the relevance and quality of the articles.\nIf you follow the right people, you will know what is hot in Java. You will see what technologies get talked about the most and what changes may be coming your way. This goes beyond Java: if you are interested in Kotlin or Spring, you can follow experts in these areas as well.\nAnother thing that I like about Twitter is that it gets personal. Sometimes in a good way. On Twitter, you may find the personal opinions and preferences of key people in the industry. I think this is useful, as genuine excitement can really pique our interest, more than a dry line in a book.\nTweets, being quite small, make it easy to scan for relevant information. I like the fact that people make an effort to make their messages short and to the point. 
It makes it not much effort to read and decide if you are going to click that linked article.\nWith all these benefits, I think using Twitter for staying up to date with tech is a great idea. If it wasn’t I wouldn’t be writing about it!\nSome say you get as much out of Twitter as you put in… I don’t agree! I think you get much more than you put in with Twitter, but some work is necessary in order for this to work for you. You need to follow the right people.\nWho to follow? If you are new to Twitter, it may be a bit daunting to start. I will give you a list of accounts that I find particularly interesting to follow. If I miss some that you already follow and think are great, please let me know in the comments- I may add them here.\nLanguages related @Java – *This is the official Twitter channel for Java and the source for Java news from the Java community. –*Amazing source of news and articles related to Java development.\n@Kotlin – Statically typed programming language targeting JVM, Android, JavaScript \u0026amp; Native / Sponsored and Developed by @JetBrains – Similar to the Java Twitter account, but about Kotlin. Get ready for the future!\nSpring related @Baeldung – Eugen – Author of http://restwithspring.com and http://learnspringsecurity.com , passionate about REST, Security, TDD and everything in between. – Creator of Baeldung.com – my favorite Java/Spring blog.\n@starbuxman – Josh Long / Spring Dragon – Spring Advocate @Pivotal – One of the most recognized Spring speakers and advocates. A fascinating account to follow!\n@MGrzejszczak – Open source contributor. Author of Mockito books and Applied Continous Delivery course @safari. 
Making Spring Cloud Contract / Sleuth / Pipelines @Pivotal – Plenty of articles and information from one of the Spring Cloud prolific developers and writers.\n@springunidotcom – Laszlo Csontos – Sharing tutorials, examples and other resources with #Java devs about building #Microservices with #SpringBoot, #SpringCloud and other @SpringCentral projects – One of my favorite sources of Spring related news and articles. Great content curator!\n@springcentral –Spring helps development teams everywhere build simple, portable, fast and flexible JVM-based systems and applications. – The main Spring account, for relevant news and articles.\n@springcloud –Spring cloud news and articles.\nNews and Articles @Dzone – DZone is one of the largest web communities and publishers of technical content about. – Interesting article on all programming related topic.\n@InfoQ – Facilitating the spread of knowledge and innovation in professional software development. – Great selection of articles, similar to Dzone.\n@KentBeck – Programmer, coach coach, singer/guitarist, peripatetic. Learning to be me.– One of the fathers of XP and the Agile movement.\nGreat authors, bloggers and influencers @unclebobmartin – Software Craftsman – Author of many bestsellers and an influential blogger.\n@spolsky – Joel Spolsky – CEO of Stack Overflow, co-founder of Fog Creek Software (Glitch, FogBugz), and creator of Trello. NYC gay techie – Who does not know Stack Overflow? Also, a legendary blogger.\n@RealGeneKim – DevOps enthusiast. Coauthor: DevOps Handbook, The Phoenix Project \u0026amp; Accelerate. Tripwire founder, IT Ops/Security Researcher, Theory of Constraints Jonah. – If you want to know about DevOps, follow Gene!\n@martinfowler – Programmer, Loud Mouth, ThoughtWorker – …also one of the most influential technical architects in the world.\n@yporier – Yolande Poirier – Evangelize @Java Empower developers globally to successfully grow their projects \u0026amp; businesses. 
On the side, managing @javascript. All opinions are mine. – Runs @Java Twitter account and shares a lot of interesting content with a more personal angle.\n@gayle – Founder/CEO of CareerCup. Ex-Google, Apple, Microsoft developer. Author of Cracking the Coding Interview, Cracking the PM Interview \u0026amp; Cracking the Tech Career. – If you want to nail your technical interview or get better at running them, follow Gayle!\n@patkua – The best account to follow if you want to learn about leading teams and inspiring people. A goldmine of content.\n@simona_cotin – Cloud Developer Advocate @Microsoft – It is worth knowing what Microsoft is up to in the cloud. Simona will keep you up to date on that!\n@samnewman – Independent techie consultant focusing on Microservices, cloud and CD. Wrote Building Microservices. May contain cricket, NRL and board game references. – Author of Building Microservices, an authority on microservices architecture.\n@trisha_gee – Coder/blogger/speaker, working for JetBrains. Human. More or less. – Great content, JetBrains, Java, development related.\n@arungupta – Runner, Author, Father, Husband, Java Champion, J1 Rockstar, JUG Leader, Minecraft, Docker Captain, Devoxx4Kids, DevRel, Work for @awscloud – not many people as passionate about Java and AWS as Arun is that’s for sure!\nFunny @iamdeveloper – The Vista of Twitter accounts – A hilarious parody account to follow!\nFollow me… @e4developer – Husband and a father. Lead developer @Scott_Logic. Microservices evangelist – If you like my articles, you should follow me on Twitter. I share other great content I find as well.\nBut what about this other technology/influencer?… Just follow them! That’s the beauty of Twitter, you can curate your content!\nAlso- if I missed someone that you think is absolutely amazing and should be on that list- please let me know in the comments. 
I am always looking for new people to follow and I am sure that other readers will appreciate it as well!\nDo I have to start tweeting? One thing that seems to confuse people is the question of whether they should start tweeting as well. The answer is simple- only if you want to!\nSome people retweet and like interesting content as a way to ‘save it’ for themselves. Others ask questions of the people that actually wrote the articles. The chance to ask a question directly to the author of an article is pretty great!\nI started using Twitter fairly passively, but these days I interact with others daily. Find what works for you!\nSummary Twitter is a great way to stay up to date and learn more about programming. Beyond that, it can be a place to meet like-minded people and find out about conferences.\nI really recommend you give it a go if you are not doing that already. If you are- spread the word and let’s get our colleagues to use the platform and join in the hashtag-fueled experience.\n","permalink":"https://e4developer.com/posts/how-to-stay-up-to-date-with-java-and-tech-use-twitter/","summary":"\u003cp\u003eJava, Microservices, other JVM languages… Programming, best practices, architecture… Libraries, frameworks, methodologies… I could go on for a while here. Being a passionate and up to date Software Developer is a challenge. How do you stay up to date with all this? I will let you in on a well-known secret- use Twitter! Read on to find out what Twitter can help you with and how I use it to stay up to date.\u003c/p\u003e","title":"How to stay up to date with Java and Tech? Use Twitter!"},{"content":"One thing that differentiates microservices architecture from more traditional, monolithic development styles is the number of choices that have to be made. Which frameworks (if any) are you going to use? How to deal with configuration, orchestration or choreography etc. It may feel overwhelming. 
In this article, I will give you some advice on how to approach this Architecture of Choices with confidence and success.\nI enjoy having multiple choices and making decisions about the architecture. For some people and projects, this is a scary thing. It does not have to be. With the advice presented here, you can take back control and feel positive about the choices that you get to make.\nKnowing what is possible One thing that scares people away is the notion that they have to know everything, every framework out here. You don’t have to know every framework out there, but it helps to know about them.\nWhat I mean by that, is that you don’t have to know how to use Spring, Microprofile, Vert.X, Dropwizard and more… But it helps if you know about them. What are they, what are the pros and cons?\nWhenever you are facing a choice of a messaging technology, framework, authentication solution- do not try to learn everything, but rather try to learn about everything. What is out there and what it can be used for.\nWord of caution- if you are considering a technology, make sure that you are choosing something at least moderately popular and with some future. You can do a quick google search to see if people are blogging about it, check the number of stars on their GitHub repository etc. Try to avoid very niche projects unless you absolutely know what you are doing.\nBeing open to new ideas With the idea of learning what is possible, it is crucial to keep an open mind. We developers love to get religious about technology. I don’t think it is a road that gets us far.\nI have written previously about the importance of being humble as a software developer. In short- on more than one occasion I dismissed a better solution because I just assumed that I know better. I try to be more open-minded these days.\nWith microservices still being relatively new, people are constantly challenging the status quo and bringing new ideas to the table. 
Listen to them, you may find a better solution to your problem. Use your choices.\nAvoiding common pitfalls There are many ways to do something right- multiple paths to success. There are also sure ways that will lead you and your projects towards failure.\nComing from other architecture styles, we are prone to committing similar mistakes. I have written a list of common technical debt in microservices based on my previous experience (one of the benefits of being a consultant!).\nWhenever working on a distinct part of your architecture, make sure you familiarise yourself with common pitfalls and best practices. There is often more agreement here than with specific choices of technology.\nArchitecture and leadership – shared choices One of the ideas behind microservices is enabling software teams to own the service. That means owning the implementation, testing and often providing support for the service once in production. The DevOps culture in action.\nThat does not mean that an overall architecture is not required, or that it is only of the team’s internal interest. There is enormous benefit in having a thought-out integration between the services. Knowing how the services are supposed to cooperate.\nIf one service uses Kafka, another uses RabbitMQ and the third one is attempting to build something on top of Spring Cloud Data Flow- chaos will ensue. Some of the choices should be shared choices.\nThis is the place for architects and technical leads to step in (even if they do not have those titles- we know who you are!). If you think that the decision you are about to make will impact numerous teams and services- make it a shared choice.\nI see architects as people predominantly occupied with providing this architectural advice and working it out with the teams. 
The role of a technical lead is to give visibility to concerns that might not be visible from outside the team.\nThe Architecture of choices… and second chances One thing that should make you calmer about making all these choices is that you also get second chances. I have not seen many troubled monoliths successfully transform into great projects… I have seen that with microservices.\nI have seen people make major mistakes about how to handle configurations, change their mind about security and split one service into many others, and live to tell the tale. With microservices, the scope is always limited and you do get second chances.\nWith the knowledge that no decision is forever, you can choose something that works now, and in the worst case- replace it later. It really is a game changer when it comes to risk-taking.\nKeep track of your technical debt. With microservices, you will have a fighting chance of addressing it.\nMicroservices Blueprints You don’t have to make all the choices. After all, this architecture style is not completely new. You can model your architecture based on successful implementations and microservices blueprints.\nI am a big fan of the Spring ecosystem, which provides such a blueprint in the form of the Spring Cloud offering. I have reviewed Spring Cloud in the context of a blueprint on this blog.\nIf you are building a system based on a framework, check what others did; plenty of companies, including the famously microservices-driven Netflix, are quite open about their journey with microservices. You can model your choices around others’ successes.\nSummary Microservices architecture truly is the Architecture of Choices. Is that bad? In my opinion- no. It is challenging if you are not used to it. 
With these challenges come opportunities.\nI hope that by reading the advice given here, you will face these choices strategically and with more confidence.\n","permalink":"https://e4developer.com/posts/microservices-the-architecture-of-choices/","summary":"\u003cp\u003eOne thing that differentiates microservices architecture from more traditional, monolithic development styles is the number of choices that have to be made. Which frameworks (if any) are you going to use? How to deal with configuration, orchestration or choreography etc. It may feel overwhelming. In this article, I will give you some advice on how to approach this \u003cem\u003eArchitecture of Choices\u003c/em\u003e with confidence and success.\u003c/p\u003e\n\u003cp\u003eI enjoy having multiple choices and making decisions about the architecture. For some people and projects, this is a scary thing. It does not have to be. With the advice presented here, you can take back control and feel positive about the choices that you get to make.\u003c/p\u003e","title":"Microservices - the Architecture of Choices"},{"content":"I have previously written about Getting Reactive with Spring Boot 2.0 and Reactor, where I have given an introduction to reactive programming in Spring Boot. In this article, I will further explore WebFlux and the ways it impacts servicing client requests- what happens when you return a Flux\u0026lt;\u0026gt;?\nSimple Flux When you write a Controller that returns a list of numbers from your function, you get a list of numbers when you call it. How does it work when you return a Flux like that?:\n@RestController @RequestMapping(\u0026#34;/numbers\u0026#34;) public class NumbersController { @GetMapping(path = \u0026#34;/count/{number}\u0026#34;) public Flux\u0026lt;Integer\u0026gt; countToNumber( @PathVariable(\u0026#34;number\u0026#34;) int number) { return Flux.range(0, number); } } The good news- it works the same as if you were returning a list of numbers. 
For these static types of Flux, where no long-running processing happens and no explicit FluxSink manipulation is performed, it is pretty straightforward:\nDynamic Flux What should we expect when dealing with a more dynamic Flux? One where there is a slow-running process based on, let’s say, counting up by one every second?\npublic class SlowCounter { private SlowCounter(){} static void count(FluxSink\u0026lt;Integer\u0026gt; sink, int number) { SlowCounterRunnable runnable = new SlowCounterRunnable(sink, number); Thread t = new Thread(runnable); t.start(); } public static class SlowCounterRunnable implements Runnable { FluxSink\u0026lt;Integer\u0026gt; sink; int number; public SlowCounterRunnable(FluxSink\u0026lt;Integer\u0026gt; sink, int number) { this.sink = sink; this.number = number; } public void run() { int count = 0; while (count \u0026lt; number) { try { Thread.sleep(1000); } catch (InterruptedException e) { e.printStackTrace(); } sink.next(count); count++; } //Only on complete() is the result sent to the browser sink.complete(); } } } With the attached Controller:\n@GetMapping(path = \u0026#34;/slow_count/{number}\u0026#34;) public Flux\u0026lt;Integer\u0026gt; slowCountToNumber( @PathVariable(\u0026#34;number\u0026#34;) int number) { Flux\u0026lt;Integer\u0026gt; dynamicFlux = Flux.create(sink -\u0026gt; { SlowCounter.count(sink, number); }); return dynamicFlux; } It turns out that we are still dealing with a pretty standard HTTP call. Once sink.complete() is called, the list is returned in pretty much the same fashion:\nIt is great that it works so simply- you can start using reactive programming on your server without impacting clients.\nWhat if you want to be more dynamic with your communication? 
After we have just introduced all this reactivity…\nServer-Sent Events (SSE) based Flux One way to enable a more active channel of communication between your WebFlux service and a client is to make use of Server-Sent Events.\nIf you have not heard of them, they are a way for a web-app to subscribe to a stream of updates generated by a server. If you want a more thorough introduction, there is one titled Stream Updates with Server-Sent Events published on html5rocks.\nHow do you enable SSE in WebFlux? By adding a simple attribute to your controller method’s annotation:\n@GetMapping(path = \u0026#34;/stream_count/{number}\u0026#34;, produces=MediaType.TEXT_EVENT_STREAM_VALUE) public Flux\u0026lt;Integer\u0026gt; streamCountToNumber( @PathVariable(\u0026#34;number\u0026#34;) int number) { Flux\u0026lt;Integer\u0026gt; dynamicFlux = Flux.create(sink -\u0026gt; { SlowCounter.count(sink, number); }); return dynamicFlux; } produces=MediaType.TEXT_EVENT_STREAM_VALUE is what enables Spring to turn your method into a source of Server-Sent Events.\nHow does that look in the browser?\nThis is much more interesting, with the endpoint working very differently than in the other cases.\nMaking use of the SSE-based Flux I have written a simple \u0026lt;label\u0026gt; that I want to update as the new events are coming in:\nCount: \u0026lt;label id=\u0026#34;count\u0026#34;\u0026gt;\u0026lt;/label\u0026gt; This can be easily done by creating the following jQuery-based JavaScript:\nfunction createCountSource() { var source = new EventSource(\u0026#34;http://localhost:8080/numbers/stream_count/10\u0026#34;); source.addEventListener(\u0026#39;message\u0026#39;, function (e) { var body = JSON.parse(e.data); $(\u0026#34;#count\u0026#34;).text(body); // You can close the re-connection attempt // if(body === 5) // source.close(); }, false); return source; } $(document).ready(function () { source = createCountSource(); }); As you can see, all you need to do is to subscribe to the newly created 
EventSource and then react as the events are coming in.\nThe only issue with this code is that the client will automatically try to reconnect after processing all the events. This will result in re-counting the 0-9 numbers.\nIf you want to avoid that behavior, you need to call source.close() at an appropriate moment. That, unfortunately, is not handled very clearly by the server. The other option is to use the SSE-based approach only where you want the connection open for the length of the user’s visit to a page.\nWord of warning for the Server-Sent Events Even though Server-Sent Events are not very new, there is not yet full browser support for them… Well… All the major browsers support them except Internet Explorer and Edge. This is quite disappointing, as with Edge I was starting to expect Microsoft to step up its game.\nThe other warning comes with the fact that their use is still not common. With that comes less solid support from libraries and less information on the best practices and patterns. I think this will change, but know what you are getting into.\nWhat about WebSockets? One exciting way to build Client-Server communication is using WebSockets. I consciously did not include it in this article, as it is quite different from standard HTTP-verb-based communication.\nRest assured that WebFlux supports WebSockets. Because of the major differences, I decided to tackle it in a separate blog post.\nConclusion Basic usage of WebFlux and Flux\u0026lt;\u0026gt; itself is very simple. Your application clients should not see any difference. This enables a smooth transition of multiple applications towards the reactive style.\nWhile Server-Sent Events become more interesting when coupled with Flux\u0026lt;\u0026gt;, they are still not as popular or widely supported. 
If you know what you are doing and you are targeting a known platform, they can be very useful.\nYou can find the example source code on my GitHub\n","permalink":"https://e4developer.com/posts/webflux-and-servicing-client-requests-how-does-it-work/","summary":"\u003cp\u003eI have previously written about \u003ca href=\"https://e4developer.com/posts/getting-reactive-with-spring-boot-2-0-and-reactor/\"\u003eGetting Reactive with Spring Boot 2.0 and Reactor\u003c/a\u003e, where I have given an introduction to reactive programming in Spring Boot. In this article, I will further explore WebFlux and the ways it impacts servicing client requests- what happens when you return a Flux\u0026lt;\u0026gt;?\u003c/p\u003e\n\u003ch3 id=\"simpleflux\"\u003eSimple Flux\u003cInteger\u003e\u003c/h3\u003e\n\u003cp\u003eWhen you write a Controller that returns a list of numbers from your function, you get a list of numbers when you call it. How does it work when you return a Flux like that?:\u003c/p\u003e","title":"WebFlux and servicing client requests - how does it work?"},{"content":"Reactive programming is gaining rapid popularity in the JVM community. With Java 9 natively embracing the Reactive Streams and Spring Boot 2.0 including WebFlux, it is hard to argue with this statement. Spring uses Reactor for its own reactive support and WebFlux relies on that support. In this article, I will show you how to get into reactive programming with Reactor and Spring Boot 2.0.\nWhat is Reactor? Project Reactor is quite well described by the tagline on their official page:\nReactor is a fourth-generation Reactive library for building non-blocking applications on\nthe JVM based on the Reactive Streams Specification\nTo re-phrase, it is a library for building reactive applications on the JVM that is based on the Reactive Streams Specification.\nI have recently blogged about the Reactive Streams native SPI support in Java 9 and, as of the time of writing, Reactor does not use that yet. 
Since the SPI is quite new, I hope Reactor will switch to it in the near future.\nCore ideas behind Reactor As already mentioned, Reactor is based on the Reactive Streams Specification.\nReactor provides two implementations of the Publisher as defined by the specification- Flux and Mono. Understanding these two concepts is crucial to understanding Reactor. Let’s have a look at the Publisher interface:\npublic interface Publisher\u0026lt;T\u0026gt; { public void subscribe(Subscriber\u0026lt;? super T\u0026gt; s); } If allowing subscribe was all that we could do with Flux and Mono, then they wouldn’t be that impressive. Flux and Mono do much more, but before jumping into examples, let’s define them:\nFlux, an Asynchronous Sequence of 0-N Items Mono, an Asynchronous 0-1 Result Mono and Flux explained by example For me it was easiest to understand what Mono and Flux are with a few examples:\nMono and Flux can be used in a static way, either a sequence of 0-1 items (Mono) or 0-N items (Flux):\nMono\u0026lt;String\u0026gt; emptyMono = Mono.empty(); Mono\u0026lt;String\u0026gt; staticMono = Mono.just(\u0026#34;e4developer\u0026#34;); Flux\u0026lt;Integer\u0026gt; emptyFlux = Flux.empty(); Flux\u0026lt;Integer\u0026gt; numbersOneToTen = Flux.range(1, 10); Flux\u0026lt;String\u0026gt; staticFlux = Flux.just(\u0026#34;e4developer\u0026#34;, \u0026#34;reactive\u0026#34;, \u0026#34;reactor\u0026#34;); Mono and Flux values are processed by subscribing to them:\nstaticFlux.subscribe(word -\u0026gt; System.out.println(word));\nThis snippet will print the values \u0026quot;e4developer\u0026quot;, \u0026quot;reactive\u0026quot;, \u0026quot;reactor\u0026quot; as you would expect when iterating the list. The key rule of Mono and Flux is:\nNothing Happens Until You subscribe()\nMono and Flux can be used in a dynamic way. 
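Before moving on to the dynamic usage, the subscribe() rule above is worth seeing in action. Here is a small sketch of mine (the SubscribeRule class and buildPipeline helper are illustrative, assuming only the reactor-core dependency), using doOnNext to record what is actually emitted:

```java
import java.util.ArrayList;
import java.util.List;

import reactor.core.publisher.Flux;

public class SubscribeRule {
    static final List<String> emitted = new ArrayList<>();

    static Flux<String> buildPipeline() {
        // Assembling the Flux runs nothing - doOnNext fires only per delivered item
        return Flux.just("e4developer", "reactive", "reactor")
                   .doOnNext(emitted::add);
    }

    public static void main(String[] args) {
        Flux<String> words = buildPipeline();
        System.out.println(emitted.isEmpty()); // true - nothing has been emitted yet
        words.subscribe();
        System.out.println(emitted); // the items flow only after subscribe()
    }
}
```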
You can make use of the FluxSink to bind the subscription:\npublic class EventListener { int count = 0; FluxSink\u0026lt;String\u0026gt; sink; void generate() { while (count \u0026lt; 10) { sink.next(\u0026#34;event \u0026#34; + count); count++; } sink.complete(); } public void register(FluxSink\u0026lt;String\u0026gt; sink) { this.sink = sink; } } Flux\u0026lt;String\u0026gt; dynamicFlux = Flux.create(sink -\u0026gt; { EventListener eventListener = new EventListener(); eventListener.register(sink); eventListener.generate(); }); dynamicFlux.subscribe(System.out::println); In the code above, the lambda passed to create() will be called every time a new subscription is created. Make sure you are passing listeners here rather than generators.\nFlux and Mono offer many additional features, and if you wish to use them in production, I recommend checking the Reactor core features reference.\nA few words on threading and parallelism in Reactor Threading is an important part of Reactor, as one of the motivations behind Reactive Streams is better utilization of threads.\nIn Reactor you deal with threading by selecting the kind of Scheduler you want to publishOn or subscribeOn:\nFlux.range(1, 100).publishOn(Schedulers.parallel()); You can also make use of Schedulers when building an interval-based Flux:\nFlux.interval(Duration.ofMillis(100), Schedulers.newSingle(\u0026#34;dedicated-thread\u0026#34;)); To learn more about Schedulers and the different types that you have at your disposal, have a look at the Reactor reference.\nIt is worth making clear that using .publishOn(Schedulers.parallel()) will not make your code run in parallel! 
You are only using a specific Thread pool designed to match your machine’s available parallelism.\nIf you actually want to run through your Subscription in a parallel fashion, you should use the .parallel() method instead:\nFlux.range(1, 1000) .parallel(8) .runOn(Schedulers.parallel()) .subscribe(i -\u0026gt; System.out.println(i)); Make your synchronous calls asynchronous When writing your application in a reactive fashion, you want to get rid of blocking synchronous calls. Sometimes, you will have to make such a call (often to external resources). To do that, use the following pattern:\nMono wrapBlockingCode = Mono.fromCallable(() -\u0026gt; { return /* blocking synchronous call */ }); wrapBlockingCode = wrapBlockingCode.subscribeOn(Schedulers.elastic()); We are making use of the elastic Scheduler, which creates dedicated Threads as required.\nWhere does Spring Boot 2.0 come in? One of the brand new features in Spring Boot 2.0 is the incorporation of WebFlux. To use it in your project you can simply use the following dependency:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-webflux\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; WebFlux is a vast framework, so I will give you the basics of what it brings:\nIt brings Reactor as a dependency Contains support for reactive HTTP and WebSocket clients Changes the default embedded server to reactor-netty, as WebFlux needs a non-blocking runtime (Servlet containers require Servlet 3.1+ non-blocking support) With that you can start writing Controllers that look more like this:\n@RestController public class FeatureController { public FeatureController() { } @GetMapping(\u0026#34;/features\u0026#34;) Flux\u0026lt;String\u0026gt; list() { return Flux.just(\u0026#34;Features 1\u0026#34; , \u0026#34;Features 2\u0026#34; , \u0026#34;Features 3\u0026#34;); } @GetMapping(\u0026#34;/features/{id}\u0026#34;) Mono\u0026lt;String\u0026gt; findById(@PathVariable String id) { return Mono.just(\u0026#34;Features \u0026#34;+id); } } Did you notice the Flux and Mono we just discussed? These are the bread and butter of reactive development with WebFlux.\nHow to get Reactor without Spring Boot 2.0 or without WebFlux If you are not yet using Spring Boot 2.0, or you want only parts of your application to be reactive, you can bring in Reactor on its own by adding the following BOM:\n\u0026lt;dependencyManagement\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.projectreactor\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;reactor-bom\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;Bismuth-RELEASE\u0026lt;/version\u0026gt; \u0026lt;type\u0026gt;pom\u0026lt;/type\u0026gt; \u0026lt;scope\u0026gt;import\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; \u0026lt;/dependencyManagement\u0026gt; and the dependency:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.projectreactor\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;reactor-core\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; Summary Reactive programming and Reactive Streams bring a new style of programming to the server side. Asynchronous, non-blocking processing brings plenty of benefits but can be challenging. I hope that after reading this article you are ready to start exploring the use of Reactor or even WebFlux in your own project. I believe that this is just the beginning of the reactive revolution!\n","permalink":"https://e4developer.com/posts/getting-reactive-with-spring-boot-2-0-and-reactor/","summary":"\u003cp\u003eReactive programming is gaining rapid popularity in the JVM community. With \u003ca href=\"https://e4developer.com/posts/reactive-streams-in-java-introducing-the-new-spi/\"\u003eJava 9 natively embracing the Reactive Streams\u003c/a\u003e and Spring Boot 2.0 including WebFlux, it is hard to argue with this statement. 
Spring uses Reactor for its own reactive support and WebFlux relies on that support. In this article, I will show you how to get into reactive programming with Reactor and Spring Boot 2.0.\u003c/p\u003e\n\u003ch3 id=\"what-is-reactor\"\u003eWhat is Reactor?\u003c/h3\u003e\n\u003cp\u003eProject Reactor is quite well described by the tagline on their \u003ca href=\"https://projectreactor.io/\"\u003eofficial page\u003c/a\u003e:\u003c/p\u003e","title":"Getting Reactive with Spring Boot 2.0 and Reactor"},{"content":"Thank you for reading my newsletter. I have sent the previous newsletter on the 1st of March and I decided to send these on a monthly basis. Expect the next one in May.\nI have been busy looking into different areas of software development, including Spring, Kotlin, and DevOps. I have been particularly interested in the idea of reactive-microservices and Reactive Streams. I think these will impact Java development in a significant way. Check Reactive Streams in Java – introducing the new SPI to see how Java 9 is already embracing these ideas.\nI have also spoken about being a Lead Developer in Being a Lead Developer on the Road – Presentation. 
If you are interested in having me speak at your event, please get in touch via Twitter or my contact form.\nNew Articles Personal and practical:\nThe importance of being humble as a software developer\nProductivity advice for developers and development teams\nKeeping your coding skills sharp with HackerRank\nSpring Centered:\nIntroducing JSON-B with Spring Boot 2.0\nIntroduction to Concurrency in Spring Boot\nHow to learn Spring Cloud – the practical way\nKotlin Microservices:\nLightweight Kotlin Microservices with Javalin\nDevOps:\nThe Phoenix Project – a key to understanding DevOps\nCore Java and Development:\nCQRS – a simple explanation\nReactive Streams in Java – introducing the new SPI\nPublic Speaking:\nBeing a Lead Developer on the Road – Presentation\n","permalink":"https://e4developer.com/posts/e4developer-newsletter-april-2018-number-2/","summary":"\u003cp\u003eThank you for reading my newsletter. I have sent the \u003ca href=\"https://e4developer.com/posts/e4developer-newsletter-february-2018-number-1/\"\u003eprevious newsletter\u003c/a\u003e on the 1st of March and I decided to send these on a monthly basis. Expect the next one in May.\u003c/p\u003e\n\u003cp\u003eI have been busy looking into different areas of software development, including Spring, Kotlin, and DevOps. I have been particularly interested in the idea of reactive-microservices and Reactive Streams. I think these will impact Java development in a significant way. Check \u003ca href=\"https://e4developer.com/posts/reactive-streams-in-java-introducing-the-new-spi/\"\u003eReactive Streams in Java – introducing the new SPI\u003c/a\u003e to see how Java 9 is already embracing these ideas.\u003c/p\u003e","title":"E4developer Newsletter - April 2018 - Number 2"},{"content":"Getting work done effectively and efficiently is a goal of most software development teams. On a personal level, being able to get a productive day at work can also be immensely satisfying. 
In this article, I will share with you my advice on how to be much more productive. This advice is inspired by “The 7 Habits of Highly Effective People” – a book that made a big impact on me.\nI believe one of the keys to increased productivity is to concentrate on doing the right things at work. How do you know what the right things are? To answer that question, I will use The Management Matrix as described in “The 7 Habits of Highly Effective People”.\nThe Management Matrix We can classify tasks that you do in the following ways:\nImportant Work – This brings us closer to achieving our goals Not Important Work – This does not bring us closer to achieving our goals Urgent Work – There is pressure to do it right now Not Urgent Work – There is no pressure to do it right now We can arrange these categories into four squares that create The Management Matrix:\nEach task we do will belong to one of these four quadrants.\nThe Four Quadrants for Software Developers Looking at software development, let’s see which activities belong to which quadrant:\nQuadrant 1 – Urgent and Important These are the things that are important and require our immediate attention:\nProblems in production – we need to fix them right now. These can include bugs, security issues, infrastructure problems. Important deadlines – they take priority. These are the things that we have an obligation to deliver. Especially as the deadlines get closer and we are not finished yet. Crises – of different nature. These are the things that happen and need our immediate response. Important management meetings to which we may be called etc. Quadrant 2 – Not Urgent and Important These are the things that are important and do not require our immediate attention:\nAdding quality tests – unit tests, integration tests, anything really. Servicing our technical debt – as identified by the team or architecture. Building relations with other teams – getting to know testers, operations people, etc. better. 
Improving and creating documentation – making it easier for developers and others to work with the code and software that is created. Learning new technologies and upskilling the team Other non-pressing, improvement-related work Quadrant 3 – Urgent and Not Important These are the things that are not important and require our immediate attention:\nInterruptions to development work that are trivia-related Dealing with reports that do not get read or used Unproductive meetings that we are asked to attend Solving crises unrelated to our goals Visible, popular activities that do not bring us closer to our goal Quadrant 4 – Not Urgent and Not Important These are the things that are not important and do not require our immediate attention:\nGold-plating code, looking for busy work to keep occupied Attending unimportant optional meetings General time wasting Pleasant, but unimportant activities What to work on then? Quadrant 1 – Urgent and Important will always take priority. There is no escaping from the fact that you have to deal with the urgent and important matters. Sometimes it may feel like this is all that we do!\nHow do you get more productive then? As you might have figured, most of the helpful activities are in Quadrant 2 – Not Urgent and Important. Clearly, these are the things that are worth doing. Often these activities (better testing, documentation, higher quality) dramatically reduce the number of Quadrant 1 activities required.\nThe key to unlocking productivity and effectiveness can be summed up in two rules:\nQuadrant 2 activities are the key to gaining control, reducing crises and increasing productivity. You can only get time for Quadrant 2 activities by reducing time wasted on Quadrant 3 and 4 activities. Things you can start doing now The good news is that you don’t have to wait for anything to start getting control of your work life back. 
I suggest that you start doing the following:\nIdentify to which quadrant the different tasks that you or your team perform belong. For tasks that belong to Quadrants 3 and 4- stop doing them. Learn how to say no and cut the unimportant things as effectively as you can. The time that you and your team saved by cutting the Quadrant 3 and 4 work can be invested in Quadrant 2 activities. As time progresses, you will see that there are fewer and fewer Quadrant 1 items as you are gaining back control. Make sure to re-invest this time in Quadrant 2 activities. Summary Being effective at work is as much about not doing the wasteful tasks as it is about doing the right ones. People often overemphasize being busy at work, without spending enough time wondering if they are busy with the right things. I hope this framework will help you get better at this important activity. If you enjoyed this article, I recommend reading “The 7 Habits of Highly Effective People” – it is one of those books that has the potential of leaving a mark on the rest of your life.\n","permalink":"https://e4developer.com/posts/productivity-advice-for-developers-and-development-teams/","summary":"\u003cp\u003eGetting work done effectively and efficiently is a goal of most software development teams. On a personal level, being able to get a productive day at work can also be immensely satisfying. In this article, I will share with you my advice on how to be much more productive. This advice is inspired by “The 7 Habits of Highly Effective People” – a book that made a big impact on me.\u003c/p\u003e","title":"Productivity advice for developers and development teams"},{"content":"One of the new features of Java 9 is the introduction of the Reactive Streams SPI to the JDK. Reactive programming keeps gaining in popularity, mainly because it works well. If you are not familiar with the principles, I recommend checking out The Reactive Manifesto to which I subscribe. 
To learn more about Reactive Streams in Java, read on.\nReactive Streams got introduced to Java as java.util.concurrent.Flow. Before looking into that, let’s see what Reactive Streams are and how we can make use of them.\nIntroducing the idea of Reactive Streams The original initiative for the introduction of Reactive Streams can be found at http://www.reactive-streams.org/. To quote from their GitHub project page:\nThe purpose of Reactive Streams is to provide a standard for asynchronous stream processing with non-blocking backpressure.\nTo fully understand that quote, let’s look at these two concepts here:\nasynchronous stream processing – This means processing data streams with parallel use of computing resources on a single machine. Streams can consist of live data of non-predetermined size, and this is where the difficulty lies. non-blocking backpressure – When exchanging data across an asynchronous boundary, you force the other side to deal with the data. This is called backpressure. Think of it as flow control. Care needs to be taken for this part of the stream management to be asynchronous as well (non-blocking). With these ideas discussed, we can describe Reactive Streams as a specification for stream-oriented JVM libraries that:\nsequentially process a potentially unbounded number of elements asynchronously pass elements between components include mandatory non-blocking backpressure Reactive Streams Specification The actual Reactive Streams specification can be found here: https://github.com/reactive-streams/reactive-streams-jvm#specification\nIt consists of four main components:\nPublisher Subscriber Subscription Processor And a list of rules for each of the components.\nJava 9 Reactive Streams Where does Java come in here? With the introduction of java.util.concurrent.Flow, the JDK now includes an SPI (Service Provider Interface) that will guide implementations of Reactive Streams.\nIt is important to note that this is not meant to be the client API. 
Developers are not expected to be directly using the java.util.concurrent.Flow, implementing its different interfaces. The goal here is to guide other implementations of Reactive Streams that will be able to seamlessly interoperate in Java.\nJava 9 Reactive Streams SPI Let’s have a closer look at the different interfaces that make up the Reactive Streams SPI in Java:\nFlow.Publisher public interface Publisher\u0026lt;T\u0026gt; { public void subscribe(Subscriber\u0026lt;? super T\u0026gt; s); } Flow.Subscriber public interface Subscriber\u0026lt;T\u0026gt; { public void onSubscribe(Subscription s); public void onNext(T t); public void onError(Throwable t); public void onComplete(); } Flow.Subscription public interface Subscription { public void request(long n); public void cancel(); } Flow.Processor\u0026lt;T,R\u0026gt; public interface Processor\u0026lt;T, R\u0026gt; extends Subscriber\u0026lt;T\u0026gt;, Publisher\u0026lt;R\u0026gt; { } Reactive Streams Technology Compatibility Kit Implementing these interfaces is not enough to create a correct Reactive Streams implementation. As mentioned before, there are the components and rules that make up the specification. The components are defined by the interfaces, while the rules are defined by the Reactive Streams Technology Compatibility Kit (TCK).\nThe TCK is a set of tests designed to cover all the rules mentioned in the specification. In order to correctly implement Reactive Streams, you have to implement the interfaces and make sure that your implementation passes the TCK tests.\nTo find out more about the TCK, check out the project README file.\nWhat are some of the implementations of Reactive Streams in Java? As mentioned, this SPI is not meant to be used by end users. It is a tool for correctly implementing Reactive Streams. 
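That said, the JDK does ship one small ready-made building block: SubmissionPublisher, a Flow.Publisher implementation. The sketch below is my own illustration (the FlowDemo class and collect helper are hypothetical, not part of the specification) of the interfaces cooperating, including request(n) backpressure:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class FlowDemo {
    // Collects published items using a Subscriber that requests one element at a time
    static List<String> collect(String... items) throws InterruptedException {
        List<String> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        try (SubmissionPublisher<String> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<String>() {
                private Flow.Subscription subscription;

                @Override public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1); // non-blocking backpressure: ask for a single item
                }

                @Override public void onNext(String item) {
                    received.add(item);
                    subscription.request(1); // signal readiness for the next item
                }

                @Override public void onError(Throwable t) { done.countDown(); }

                @Override public void onComplete() { done.countDown(); }
            });
            for (String item : items) {
                publisher.submit(item); // blocks only if the subscriber falls far behind
            }
        } // close() triggers onComplete once pending items are delivered
        done.await();
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(collect("reactive", "streams")); // prints [reactive, streams]
    }
}
```

Note how the Subscriber never sees more items than it has requested - that is the backpressure contract the TCK verifies.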
If you want to actually start using them, you have quite a few implementations to choose from:\nProject Reactor – “Reactor is a fourth-generation Reactive library for building non-blocking applications on\nthe JVM based on the Reactive Streams Specification” RxJava – “RxJava is a Java VM implementation of Reactive Extensions: a library for composing asynchronous and event-based programs by using observable sequences.” Vert.x Reactive Streams Integration – a reactive microservices library Akka Streams – Reactive Streams implementation in the Akka Framework Spring Boot 2 uses Reactor to provide Reactive Streams. Some of these frameworks do not use the Java 9 streams yet; rather, they rely on the original reactive-streams project. Since there is a nearly 1-1 mapping between the two, most of them are in the process of moving to java.util.concurrent.Flow.\nSummary Reactive Streams is not a new idea. It has already gained enough popularity to be included in the JDK as an SPI. With that, we can expect only further growth of reactive libraries and reactive programming on the JVM. If you have not done that already, read the Reactive Manifesto and get ready for the Reactive future- it has already begun.\n","permalink":"https://e4developer.com/posts/reactive-streams-in-java-introducing-the-new-spi/","summary":"\u003cp\u003eOne of the new features of Java 9 is the introduction of the Reactive Streams SPI to the JDK. Reactive programming keeps gaining in popularity, mainly because it works well. If you are not familiar with the principles, I recommend checking out \u003ca href=\"https://www.reactivemanifesto.org/\"\u003eThe Reactive Manifesto\u003c/a\u003e to which I subscribe. To learn more about Reactive Streams in Java, read on.\u003c/p\u003e\n\u003cp\u003eReactive Streams got introduced to Java as \u003ccode\u003ejava.util.concurrent.Flow\u003c/code\u003e. 
Before looking into that, let’s see what Reactive Streams are and how we can make use of them.\u003c/p\u003e","title":"Reactive Streams in Java - introducing the new SPI"},{"content":"I have recently been thinking about the importance of humility for software developers. I feel that the more I learn about building software, the humbler I become, knowing I do not have all the answers. This attitude helped me a lot in my life as a software developer…\nI was an OOP expert after 3 years of study, or so I thought… I remember thinking that I knew Java and Object Oriented Programming pretty well when I was finishing my Bachelor’s degree. Why would I not think that? I had great marks, aced all the classes, read a couple of books and in general felt like I had a good handle on this whole Java and OOP thing.\nDuring my Master’s degree, I attended a class about Object Oriented Programming in Java. I enjoyed the subject and I thought it would be a fun and easy class… The class was fun, but it was far from easy! I realized how much more there is to the subject and how much more there is to learn…\nFast forward a few years- I ended up reading about GRASP (General responsibility assignment software patterns) and once again felt like there was a breakthrough in my understanding…\nThen came Domain-Driven Design… Do I need to say more?\nNow I know much more about OOP and Java than I did after finishing my Bachelor’s degree. I also know that there is more to learn both in terms of knowledge and in terms of skill.\nBeing humble is being open to learning more The main point I was trying to illustrate with this story is the amount of learning that happened after the perceived mastery. I am not saying here that I was not competent in the topic after my studies, but that there was so much more to it.\nWhen you have a realistic view of what it takes to really master a subject, you will not stop learning. 
In many disciplines, especially in an ever-changing field like software development- full mastery is impossible. New things are introduced and added to the field constantly.\nTo put the advice I am giving here into one sentence: Be humble about your own knowledge.\nThere is more left to learn than you may realize; often you won’t know what you don’t know.\nI knew that JavaEE was the best way to build enterprise software or so I thought… When I started working with enterprise software, I was working with JavaEE and JBoss. Back then I was quite convinced that the latest version of Java with a JBoss server was the way to go.\nI heard about the Spring Framework but, given that I was working with (in my opinion superior) JavaEE, I was quite happy to dismiss it. After all, what good comes from learning frameworks that are not based on “standards”?\nIf you read my blog, you will realize that I am currently working with Spring extensively and I am loving it. It is a great framework and, for most cases, I actually prefer it to JavaEE.\nDon’t fool yourself into believing that your tool is the best, just because you know it For me, this JavaEE to Spring move was just one of many times where I realized that I was fooling myself that I knew the answer.\nThis is not as much about overestimating your knowledge in one area, as it is about being open to different, new ideas. Maybe there is a reason why someone prefers a different language than you do?\nThere is a short and very good article called Give it five minutes where Jason Fried describes the moment where he learned a similar lesson. In brief- when someone invests large amounts of time working on something and believing in an idea, it is unwise to dismiss it straight away, simply because you originally had something else in mind. Give it five minutes. 
You may learn a lot if you open your mind a little.\nTo remember this advice, let’s summarise it- Give it five minutes, keep your mind ready to change.\nI know how this process should work, what can he possibly teach me? I have been in multiple situations where I would be discussing with someone how to improve a software development process in a project I was part of at that time. Very often I would start “listening” just preparing my responses. Of course, I was right (I would think), what can that person teach me?\nWhen I look at these moments now, I think about the wasted time. So many times, the answer to a problem was right there, but I would not listen- I would focus predominantly on my own responses…\nListen to others, genuinely trying to understand When I started listening to people, my life as a software developer became much easier. In fact, it made such a difference that I list it as the first Soft Skill for Software Developers in my article I wrote for the Scott Logic Blog.\nYou need a certain level of humility when listening to others. You need to be open to understanding and even open to changing your own mind.\nIf you listen with this deep intention to understand, you will not only understand better but the person that you are listening to, may open up more and give you a better picture. Once they know that you understood them, they may even become open to change.\nThis idea of understanding others as a key to working with people is explored at length in The 7 Habits of Highly Effective People– a book that I can’t recommend highly enough if you want to learn how to work better with others.\nThis advice can be summarised as- Listen to others trying to understand them first.\nThe balance between humility and confidence – beat the impostor syndrome! I want to make it absolutely clear here- I am not advising you to develop an impostor syndrome! 
If you have not heard of the term, here is the Wikipedia definition:\na concept describing individuals who are marked by an inability to internalize their accomplishments and have a persistent fear of being exposed as a “fraud”\nI was not naturally a humble person. While some people suffer from impostor syndrome, others may be blinded by an illusion of mastery. Both are bad and both can be damaging.\nIf you feel like an impostor, I am not telling you that your thinking is validated. All I am saying is that there is power in keeping your mind open, being realistic about the amount of knowledge out there and deeply listening to others. There is also power in being confident and not stressing too much about being a “fraud” or not. If that idea crossed your mind, you are likely better than many self-validated “masters” out there.\nRealizing how vast the field is and how impossible it is to master it all may help you fight that syndrome. The fact that you feel like there is so much to learn is good- you are ahead of the game compared to those who think they have mastered it all.\nYou are not a “fraud” because you don’t know everything. Nobody does.\nSummary The lessons I am talking about here made a big impact on the way I see things and the way I work. Keeping these ideas in mind and staying a bit humble made my life much easier. In summary, my advice is:\nBe humble about your own knowledge. Give it five minutes, keep your mind ready to change. Listen to others trying to understand them first. These are the simple rules that helped me a lot. I hope they can help you as well. Keep your mind open and don’t stop improving.\n","permalink":"https://e4developer.com/posts/the-importance-of-being-humble-as-a-software-developer/","summary":"\u003cp\u003eI have recently been thinking about the importance of humility for software developers. I feel that the more I learn about building software the humbler I become, knowing I do not have all the answers. 
This attitude helped me a lot in my life as a software developer…\u003c/p\u003e\n\u003ch3 id=\"i-was-oop-expert-after-3-years-of-study-or-so-i-thought\"\u003eI was an OOP expert after 3 years of study or so I thought…\u003c/h3\u003e\n\u003cp\u003eI remember thinking that I knew Java and Object Oriented Programming pretty well when I was finishing my Bachelor’s degree. Why would I not think that? I had great marks, aced all the classes, read a couple of books and in general felt like I had a good handle on this whole Java and OOP thing.\u003c/p\u003e","title":"The importance of being humble as a software developer"},{"content":"When building services with Spring Boot we have to deal with concurrency. There is a common misconception that because Servlets are used and each request gets its own Thread, there is no need to think about concurrency. In this article, I will give some practical advice on dealing with multi-threading in Spring Boot and how to avoid the problems it can create.\nSpring Boot Concurrency Basics The key areas worth considering when thinking about concurrency in Spring Boot applications are:\nMaximum number of threads – This is the maximum number of threads that are allocated for dealing with requests to the application Shared external resources – Calls to external shared resources such as databases Asynchronous method calls – These are method calls that release the thread back to the thread-pool when waiting for a response Shared internal resources – Calls to internal shared resources, such as caches and potentially shared application state We will look at them one after another and see how they can impact the way we write applications with Spring Boot.\nMaximum number of threads in a Spring Boot Application The first thing to be aware of is that you are dealing with a limited number of threads.\nIf you are using Tomcat as your embedded server (default), then you can use the property server.tomcat.max-threads to control how many threads you want to allow. 
This is set to 0 by default, which means: use the Tomcat default, which is 200.\nIt is important to know this, as you may need to scale this number to work effectively with the resources that the service is given. It can also become problematic when dealing with external resources…\nThe problem with shared external resources Calling databases and other REST endpoints can take significant time.\nThe limited number of threads that you are dealing with means that you really want to avoid long-running, slow, synchronous requests. If you are waiting for some slow process to complete and holding the thread, you are potentially under-utilizing your server.\nIf you have many long-running threads that are waiting for responses, you may essentially end up with a situation where really fast, simple requests are waiting for long, “forever-waiting” requests to terminate.\nHow can this be improved?\nAsynchronous method calls to the rescue It often helps to request multiple things at once. Ideally, if you need to call three services: Service A, Service B, and Service C; you don’t want to do this:\nCall Service A Wait for a response from Service A Call Service B Wait for a response from Service B Call Service C Wait for a response from Service C Compose responses from A, B and C and finish the processing If each service takes 3 seconds to respond, the whole process would take 9 seconds. It is much better to do the following:\nCall Service A Call Service B Call Service C Wait for responses from Service A, B, and C Compose responses from A, B and C and finish the processing In this case, you make all three calls without waiting for completion and, assuming that services A, B, and C are not dependent on one another, it takes 3 seconds to respond.\nThe idea of asynchronous and reactive microservices is interesting in itself. 
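The fire-all-then-wait pattern above can be sketched with plain java.util.concurrent, without any framework (a minimal illustration; callService and its simulated latency are stand-ins for real remote calls, not code from the original post):

```java
import java.util.concurrent.CompletableFuture;

public class ParallelCalls {

    // Stand-in for a slow remote call (real code would make an HTTP request)
    static String callService(String name) {
        try {
            Thread.sleep(100); // simulate network latency
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return name + ":done";
    }

    // Fire all three calls at once, then wait for all of them together.
    // Total time is roughly one call's latency, not the sum of all three.
    public static String callAllInParallel() {
        CompletableFuture<String> a = CompletableFuture.supplyAsync(() -> callService("A"));
        CompletableFuture<String> b = CompletableFuture.supplyAsync(() -> callService("B"));
        CompletableFuture<String> c = CompletableFuture.supplyAsync(() -> callService("C"));
        // join() blocks only here, after all three calls are already in flight
        return a.join() + " " + b.join() + " " + c.join();
    }

    public static void main(String[] args) {
        System.out.println(callAllInParallel());
    }
}
```

The same shape carries over to Spring's CompletableFuture-returning @Async methods: start every independent call first, and only then wait on the results.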
I recommend checking out:\nThe reactive section of this blog, especially Getting Reactive with Spring Boot 2.0 and Reactor The reactive manifesto Spring Boot 2 and WebFlux Project Reactor by Pivotal Eclipse Vert.X – reactive microservices ReactiveX (RxJava) These are all fascinating, but we are focusing on Spring Boot in this article…\nMaking asynchronous calls in Spring Boot How do you enable asynchronous method calls in Spring Boot? You want to start with the @EnableAsync annotation on your Application class under the @SpringBootApplication annotation.\nWith that enabled, you can use the @Async annotation in your services that return CompletableFuture\u0026lt;\u0026gt;. Because you have @EnableAsync, the @Async methods will be run in a background thread pool.\nIf you make good use of asynchronous execution, you will avoid many unnecessary dips in performance, making your service as fast and responsive as possible.\nFor the details of implementing this in Spring Boot I really recommend checking out the example from the official Spring website.\nShared internal resources While the previous sections deal with things we often have no control over- external resources, we are in full control of the internal resources of the system.\nKnowing that we control the internal resources, the best advice for avoiding issues related to sharing them is not to share them!\nSpring Services and Controllers are Singletons by default. It is important to be aware of that and be very careful. The moment there is mutable state in your Service, you need to deal with it as you would in any standard application.\nOther potential sources of shared state are caches and custom, server-wide components (often monitoring, security etc.).\nIf you absolutely need to share some state, here is my advice:\nDeal with immutable objects. You avoid many concurrency-related issues if your objects are immutable. If you need to change something- just create a new object. Know your Collections. 
Not all collections are Thread-Safe. A common pitfall is using HashMap assuming that it is Thread-Safe (it is not; if you need concurrent access use ConcurrentHashMap, Hashtable or another thread-safe solution). Do not assume third-party libraries are thread-safe. Most code is not, and access to shared state has to be controlled. If you are going to rely on it- learn proper concurrency. I really recommend getting a copy of Java Concurrency in Practice. Written in 2006, but still very relevant in 2018. Summary Concurrency and Multi-Threading in Spring are big and important topics. In this article, I wanted to highlight the key areas that you need to be aware of when writing Spring Boot applications. If you want to be successful when building high-demand, high-quality services, you need to make conscious decisions and trade-offs around these topics. I hope that with this article you know how to get started.\n","permalink":"https://e4developer.com/posts/introduction-to-concurrency-in-spring-boot/","summary":"\u003cp\u003eWhen building services with Spring Boot we have to deal with concurrency. There is a common misconception that because Servlets are used and each request gets its own Thread, there is no need to think about concurrency. In this article, I will give some practical advice on dealing with multi-threading in Spring Boot and how to avoid the problems it can create.\u003c/p\u003e\n\u003ch2 id=\"spring-boot-concurrency-basics\"\u003eSpring Boot Concurrency Basics\u003c/h2\u003e\n\u003cp\u003eThe key areas worth considering when thinking about concurrency in Spring Boot applications are:\u003c/p\u003e","title":"Introduction to Concurrency in Spring Boot"},{"content":"Every now and then you read a book that completely changes how you understand something. “The Phoenix Project” (Amazon) changed how I understand DevOps and driving positive transformations in companies. 
Keep reading to see why I think everyone working in the software industry should read this book.\nThe DevOps revolution I believe that as of 2018 the whole software industry is going through a DevOps revolution, similar to the Agile revolution that already took place.\nIf your organization already embraces DevOps, then congratulations- you are ahead of the curve. As a consultant though, I am seeing a large number of our clients make their first steps on the DevOps journey. I have previously written an article titled An Organization’s Journey to a DevOps Mindset and Culture that talks about some of these experiences.\nBased on the feedback from a number of organizations that already adopted DevOps, or are just starting, this is often the most beneficial IT transformation that they ever attempted. Some companies were so impressed that they even let Scott Logic publish a case study on the topic.\nDespite all that success, excitement and the revolution going on, there are still many people who don’t really get it. Why is DevOps important, why is it so successful, why is it such a big deal? They don’t want a dry handbook, they want a good, engaging story explaining it all…\nThe Phoenix Project The Phoenix Project is a novel. It is the first time that I saw a major technical idea presented as an engaging novel, rather than a handbook, a guide or a list of tutorials.\nThe book follows a story of Bill, an IT manager that has to save his company from a collapse. In the process, he gets in touch with other managers and together they embark (unknowingly) on a DevOps transformation journey.\nThe book succeeds in a few ways:\nIt is very readable, it is actually fun to read. I read it in a few short days as I found it difficult to put down. This is incredible when compared to most traditional manuals or guides. This is a book about the journey, about the transformation. 
You can see the dire state the company is in at the beginning (I am sure many readers will recognize their own issues there) and how it all ends. It brings hope, making it clear that most companies start from a pretty difficult place. It is really good at illustrating common problems and people you are likely to deal with in real life. It is actually quite scary how closely some of the characters matched people I met during my career. That helps you relate and see how things can be fixed. The core ideas are illustrated clearly and in detail. You can see the principles and their applications. You also see how the main hero discovers the core concepts, making them more memorable. This is a fun read for those working in Software and IT while teaching important lessons. What more can you ask for?\nThe core concepts explored in The Phoenix Project The key concepts that really stood out in the book are The Three Ways and The Four Types of Work. I will mention these briefly here as I don’t think it will spoil your reading:\nThe Three Ways Flow: This is the flow of work going from Development to Operations to the customer. Maximizing that flow is one of the keys to success. The practices included in Flow are continuous integration and deployment, limiting work in progress, creating environments on demand, automation etc. Feedback: This is the flow of fast feedback. Identifying problems as quickly as possible. Creating quality and knowledge. Practices helping feedback are automated test suites, builds failing in the deployment pipeline, monitoring etc. Continual Experimentation and Learning: This is about creating a culture that fosters continual experimentation and learning. This is required to get Flow and Feedback, but also to maintain them. Activities that can help here are: creating a culture of innovation, building trust, allocating at least 20% of Dev and Ops time to non-functional requirements etc. 
There is more to each of these practices and the book does a great job of exploring them in depth.\nI have published a blog post about making agile software teams more productive from the inside based on these ideas. I really like how The Three Ways apply both to individual teams and the way they co-operate.\nThe Four Types of Work Making work visible is very important. Without transparency, it is difficult to get a grip on what is really happening and where the time is spent. The four types of IT work described in the book are:\nBusiness Projects – Business initiatives, most of the development work. Internal IT projects – Infrastructure and IT Operations. Creating new environments, automating things etc. Often not tracked properly. These create problems when Operations are already under stress. Updates and Changes – Often generated from the two previous types of work. Updating and changing different systems. Unplanned work or recovery work – Incidents and problems generated by other work. These make it harder to do the planned work. The Phoenix Project explores these different kinds of work and investigates their impact on the Operations team and the wider organization.\nFinal Words The Phoenix Project is a fascinating book and a great first step on your DevOps journey. I really recommend that you read it if you have not done so already and share it with the team. Once you get the ideas and you are in need of a handbook rather than a novel- there is the highly recommended DevOps Handbook (Amazon). Good luck on your DevOps journey.\n","permalink":"https://e4developer.com/posts/the-phoenix-project-a-key-to-understanding-devops/","summary":"\u003cp\u003eEvery now and then you read a book that completely changes how you understand something. 
\u003ca href=\"https://www.amazon.com/gp/product/1942788290/ref=as_li_tl?ie=UTF8\u0026amp;tag=e4developer01-20\u0026amp;camp=1789\u0026amp;creative=9325\u0026amp;linkCode=as2\u0026amp;creativeASIN=1942788290\u0026amp;linkId=63905929a6dcbb90b2a2e97207c06361\"\u003e“The Phoenix Project” (Amazon)\u003c/a\u003e changed how I understand DevOps and driving positive transformations in companies. Keep reading to see why I think everyone working in the software industry should read this book.\u003c/p\u003e\n\u003ch2 id=\"the-devops-revolution\"\u003eThe DevOps revolution\u003c/h2\u003e\n\u003cp\u003eI believe that as of 2018 the whole software industry is going through a DevOps revolution, similar to the Agile revolution that already took place.\u003c/p\u003e","title":"The Phoenix Project - a key to understanding DevOps"},{"content":"There is a lot happening in the JVM space when it comes to microservices development. You have Spring Cloud thriving, Microprofile entering the stage, Vert.x letting you get reactive and Dropwizard being actively developed. What if you want something really simple though? And maybe with some Kotlin? For that you have Javalin!\nIntroducing Javalin Javalin is a micro-framework for building simple REST APIs for Java and Kotlin. It comes with embedded Jetty server and is very easy to use.\nThis simplicity makes Javalin a very enjoyable framework for learning Kotlin. If you are new to Kotlin, the last thing you want to do is to be overwhelmed by a new opinionated framework. You want to focus on the core language features.\nHow do you get started with Javalin then? You need to add a relevant dependency:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.javalin\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;javalin\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.6.0\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; And you can start your service development. 
The Kotlin “Hello World” example is as simple as:\nimport io.javalin.Javalin fun main(args: Array\u0026lt;String\u0026gt;) { val app = Javalin.start(7000) app.get(\u0026#34;/\u0026#34;) { ctx -\u0026gt; ctx.result(\u0026#34;Hello World\u0026#34;) } } Isn’t that great? With a single dependency and just a few lines of Kotlin code, you have a running “Hello World” service on port 7000.\nBuilding a REST API with Javalin At the core of Javalin lies the idea of using handlers. There are three main handler types:\nBefore-handlers: these are matched before every request Endpoint-handlers: for dealing with specific endpoints After-handlers: these are run after every request, even if an Exception occurred Let’s look at some code.\nTo print all the headers for every request to an /example endpoint:\napp.before(\u0026#34;/example\u0026#34;) { ctx -\u0026gt; println(ctx.headerMap()) } To implement a GET endpoint that will return “Hello World” to the caller:\nget(\u0026#34;/example\u0026#34;) { ctx -\u0026gt; ctx.result(\u0026#34;Hello World\u0026#34;) } To write “Good bye!” to the console after /example gets called:\napp.after(\u0026#34;/example\u0026#34;) { ctx -\u0026gt; println(\u0026#34;Good bye!\u0026#34;) } You can group these handlers easily into handler groups for cleaner code.\nOther core ideas in Javalin Javalin is a micro-framework, so the focus is on keeping things light (in my play-time with the framework it was consistently starting in under 1 second).\nWith that lightness in mind, there are only a few other concepts that are part of this micro-framework:\nContext (ctx) – You have seen that in action when printing headers. This is everything you need to handle HTTP requests. Access Manager – It helps you implement per-endpoint authentication and authorization. 
Exception and Error Mapping – These help you deal with your Exceptions on the top-level. Lifecycle Events – If you need to hook to SERVER_STARTING, SERVER_STARTED and similar events. Server Setup – For setting up the embedded Jetty Once again- the simplicity and clarity of these are exemplary. You really get all the basics necessary and the rest is left for you to deal with.\nPackaging an executable JAR One thing that could be improved upon in the framework is dealing with the creation of executable JAR. At the moment you have to do it yourself by adding appropriate Maven plugin. I have used the following:\n\u0026lt;plugin\u0026gt; \u0026lt;groupId\u0026gt;org.apache.maven.plugins\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;maven-assembly-plugin\u0026lt;/artifactId\u0026gt; \u0026lt;executions\u0026gt; \u0026lt;execution\u0026gt; \u0026lt;goals\u0026gt; \u0026lt;goal\u0026gt;attached\u0026lt;/goal\u0026gt; \u0026lt;/goals\u0026gt; \u0026lt;phase\u0026gt;package\u0026lt;/phase\u0026gt; \u0026lt;configuration\u0026gt; \u0026lt;descriptorRefs\u0026gt; \u0026lt;descriptorRef\u0026gt;jar-with-dependencies\u0026lt;/descriptorRef\u0026gt; \u0026lt;/descriptorRefs\u0026gt; \u0026lt;archive\u0026gt; \u0026lt;manifest\u0026gt; \u0026lt;mainClass\u0026gt;com.e4developer.MainKt\u0026lt;/mainClass\u0026gt; \u0026lt;/manifest\u0026gt; \u0026lt;/archive\u0026gt; \u0026lt;/configuration\u0026gt; \u0026lt;/execution\u0026gt; \u0026lt;/executions\u0026gt; \u0026lt;/plugin\u0026gt; I think it would be better if Javalin dealt with this problem in a similar fashion to Spring Boot, where this is taken care of for the developer.\nUpdate: I got a response from Javalin Twitter Account on this subject: ”(…) The reason why Javalin doesn’t concern itself with Jar creation is because it’s not strictly related to Javalin. 
If people learn how to do it the Maven/Gradle way, this knowledge will be useful for them in future (non Javalin) projects.” – This highlights the focus and the philosophy behind the project. I can see why adding this JAR generation may be against the spirit of the project.\nJackson and SLF4J do not come included One thing that can also be a bit of a catch is the lack of included Jackson and SLF4J implementations. I understand the reasons for not including them, and the framework makes it clear in the logs it provides that these are required.\nI have used the following dependencies and got everything to work nicely:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;com.fasterxml.jackson.core\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;jackson-core\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.9.4\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;com.fasterxml.jackson.core\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;jackson-databind\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.9.4\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.slf4j\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;slf4j-simple\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.7.25\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; Not a problem, rather something to be aware of.\nThe Javalin Project Javalin is a well-maintained project. That matters.\nThe official website https://javalin.io/ has good documentation and interesting examples. This is very important when choosing frameworks, even for personal projects.\nThe project codebase is actively maintained as a GitHub repo with regular commits being made. Javalin is written in Kotlin, and there are only 10 contributors, making it an interesting project to get involved in. The Apache license makes it a safe choice.\nSummary Javalin is a fascinating micro-framework. 
I really see it as a sort of NodeJS of the JVM. Something that is needed with the abundance of opinionated and heavy-weight frameworks out there.\nI focused here on using Javalin with Kotlin, but it also works perfectly fine with Java- hence the name!\nIf you are already using Javalin, please- share your experiences in the comments!\n","permalink":"https://e4developer.com/posts/lightweight-kotlin-microservices-with-javalin/","summary":"\u003cp\u003eThere is a lot happening in the JVM space when it comes to microservices development. You have Spring Cloud thriving, Microprofile entering the stage, Vert.x letting you get reactive and Dropwizard being actively developed. What if you want something really simple though? And maybe with some Kotlin? For that you have \u003ca href=\"https://javalin.io/\"\u003eJavalin\u003c/a\u003e!\u003c/p\u003e\n\u003ch3 id=\"introducing-javalin\"\u003eIntroducing Javalin\u003c/h3\u003e\n\u003cp\u003eJavalin is a micro-framework for building simple REST APIs for Java and Kotlin. It comes with embedded Jetty server and is very easy to use.\u003c/p\u003e","title":"Lightweight Kotlin Microservices with Javalin"},{"content":"I recently gave a talk about being a lead developer in consultancy. I stressed there the importance of learning in the role of a software consultant. Beyond that, it is good to stay “in shape” when it comes to algorithmic coding. You never know when it may be important- a difficult problem at work or a short-notice interview. Let me introduce you to one of my all-time favourite “tools”. Ladies and Gentlemen- HackerRank!\nIn this article I want to share with you ideas on how I use HackerRank to stay “in shape” when it comes to coding. Before we do that, for motivation, a quote from the ancient Greek poet Archilochus:\n“We don’t rise to the level of our expectations, we fall to the level of our training.”\nI use HackerRank for training with coding problems. 
I think it makes good sense to do that from time to time!\nWhy work with algorithmic problems? A lot of people look at a tool like HackerRank and pose the following questions: “Why would I practice with these algorithmic problems when this is not what I do at work?”. Fair question. Here are a few reasons to do that:\nSolving algorithmic problems teaches programming techniques that can be applied in different contexts. Non-trivial efficiency problems come up from time to time in most projects. When you work with algorithmic challenges, you are well equipped to handle them. You may be asked similar questions in an interview. Even if you are not looking for a new job, you may be required to participate in an interview. This is common for us in the consulting business, but it may also happen for start-up employees, as their company is being bought. It makes you a better interviewer. Expert skill in solving these kinds of problems helps you to quickly understand what others are trying to write and debug their code in your head. You may get to practice parts of your programming language that you are not using that often. It can be really fun. I am sure there are many other benefits of doing these kinds of problems. If I missed something important, let me know in the comments.\nWhy HackerRank? There are quite a few websites that let you practice your coding and solve an algorithmic challenge. Why do I like HackerRank the most? A few reasons:\nClean, modern user interface. Maybe it sounds trivial, but it really matters. It is really easy to navigate the website and find problems that are interesting to solve. Once you find your problem, the editor is clean and useful. The whole website is pleasant to use.\nMultiple languages allowed. HackerRank lets you chose multiple different languages. At the time of writing, you can use over 30 programming languages for most of their problems. This is great! Learning Rust, Go, Kotlin? 
Get your hands dirty by solving some challenges!\nThere is more to it than just solving algorithms. Many similar websites focus entirely on competitive coding. HackerRank gives you more! If you want to learn specific areas such as Functional Programming, Data Structures, even Distributed Systems- there are challenges for that!\nFun competitions. My favourite competition on HackerRank is Week of Code. The idea here is that for a whole week you are getting a new question each day. They get progressively harder as the week progresses. You can then solve them in your own time without crazy time pressure (you still have some less-crazy time pressure though!).\nWhat you can gather from these points is that these guys are really trying to build a great platform and in my eyes- they are succeeding!\nHow to use HackerRank? There are multiple ways to use HackerRank effectively. I found the following to work for me and be the most fun:\nGet better in specific areas. HackerRank does a great job at splitting tracks by subdomains. If you know that you need to get better at Graph Theory- you have a section dedicated to that. In fact, the Algorithms track is divided into multiple subdomains, starting with Warmup. And these are just the Algorithms… There are other tracks there as well!\nTake part in competitions. Taking part in competitions makes sense for a few different reasons.\nIt is good to push yourself and see how you perform in a more life-like scenario. Competition can provide that by not having a solution waiting for you. You have to either solve it or wait for the competition to be over.\nTaking part in competitions can highlight weaknesses in your coding. Maybe you will find that a particular type of question always gives you the most difficulty. With that knowledge, you know what to improve and what to work on.\nCompetitions can be very fun and motivating. Suddenly, learning becomes a game.\nTrying different languages. You can solve problems in multiple different languages. 
This can be very useful when learning new languages and looking for practical problems to solve!\nAny other websites worth recommending? HackerRank is the website that I use the most for practising my coding skills. There are a few others worth mentioning:\nhttps://www.codingame.com – for programming competitions based around game AI. Very easy to get into, very fun. They really deserve their own blog post for the amazing stuff they are doing. If you have kids- they will love the website as well.\nhttps://www.topcoder.com – for more serious competitive programming\nhttp://codeforces.com – similar to topcoder, good challenges, plenty of competitions\nhttps://www.codechef.com – even more competitive programming. I have heard good opinions about it but have not tried it yet\nhttps://www.interviewbit.com – recently recommended to me. It gamifies the experience of practicing for your interview and includes lots of sample problems.\nSummary If you like coding, learning new stuff and a bit of competition (optionally), I think you will really enjoy HackerRank. Personally, I found it very helpful in improving my interviewing skills and general algorithmic competencies.\nI have an account on HackerRank. If you want to beat me in a coding competition, sign up and follow me. I will take part in Week of Code 37 – see you on the leaderboard? Good luck!\nAlso, to be clear- nobody paid me to write this, I am sharing with you my honest and unbiased opinions here.\n","permalink":"https://e4developer.com/posts/keeping-your-coding-skills-sharp-with-hackerrank/","summary":"\u003cp\u003eI recently gave a talk about being a \u003ca href=\"https://e4developer.com/posts/being-a-lead-developer-on-the-road-presentation/\"\u003elead developer in consultancy\u003c/a\u003e. I stressed there the importance of learning in the role of a software consultant. Beyond that, it is good to stay “in shape” when it comes to algorithmic coding. 
You never know when it may be important- a difficult problem at work or a short-notice interview. Let me introduce you to one of my all-time favourite “tools”. Ladies and Gentlemen- \u003ca href=\"https://www.hackerrank.com/\"\u003eHackerRank\u003c/a\u003e!\u003c/p\u003e","title":"Keeping your coding skills sharp with HackerRank"},{"content":"On the 14th of March 2018, I had the pleasure of speaking at the While42 – French Tech Engineers Network event. I want to thank my friend Cesar Tron-Lozai and the group for the invitation.\nI am a Lead Developer at Scott Logic – a job that I am passionate about and proud of. During this event, I had a chance to share my passion for that role and share some advice based on my experiences.\nThe two main messages I wanted to convey were:\nBeing a Lead Developer in consultancy is a very people-focused role\nValues, doing the right thing, are the key\nDuring the presentation, I gave practical advice on:\nUpholding values\nLearning\nConversations\nMentoring\nCoaching\nInfluencing\nSelling\nUnfortunately, there is no recording of the talk, but if you would like to hear it- you can invite me to your meet-up/event. I really enjoy talking about this subject!\nIn the meantime, you can download the presentation here: Being a Lead Developer on the Road\n","permalink":"https://e4developer.com/posts/being-a-lead-developer-on-the-road-presentation/","summary":"\u003cp\u003eOn the 14th of March 2018, I had the pleasure of speaking at the \u003ca href=\"http://while42.org/\"\u003eWhile42 – French Tech Engineers Network\u003c/a\u003e event. I want to thank my friend \u003ca href=\"https://twitter.com/cesarTronLozai\"\u003eCesar Tron-Lozai\u003c/a\u003e and the group for the invitation.\u003c/p\u003e\n\u003cp\u003eI am a Lead Developer at \u003ca href=\"https://www.scottlogic.com/\"\u003eScott Logic\u003c/a\u003e – a job that I am passionate about and proud of. 
During this event, I had a chance to share my passion for that role and share some advice based on my experiences.\u003c/p\u003e","title":"Being a Lead Developer on the Road - Presentation"},{"content":"Command Query Responsibility Segregation (CQRS) is a pattern that causes quite a lot of confusion. With the popularity of microservices and the event-based programming model, it is important to know what CQRS is. In this article, I will provide you with a simple explanation.\nTo understand CQRS it is important to get some basic terms and concepts right. The first that often appears next to CQRS is CRUD.\nWhat is CRUD? CRUD stands for Create, Read, Update and Delete. When you think about this, this is what most basic software systems do. You have some records, you may want to read some records, update them, create or delete.\nIf you want to build a system, a reasonable starting point would be using the same model for retrieving objects as well as updating them.\nLet’s think of an example here. Assume you want to write a “Book Store Application”. You may have a BookInventoryService that lets you do things such as add new books to the inventory, mark some of them as loaned out, check if you have a specific book etc. That would be a very simple CRUD system.\nWhat is a Command in the CQRS context? A Command is a method that performs an action. These would be the Create, Update and Delete parts of a CRUD system.\nThere is really not much more to it. In the BookInventoryService, adding new books or marking them as loaned out would be carried out by Commands.\nWhat is a Query in the CQRS context? A Query is a method that returns data to the caller without modifying the records stored. 
This is the Read part of a CRUD system.\nComing back to BookInventoryService – Queries would be responsible for finding details about specific books or checking if a book is loaned out.\nCommand Query Responsibility Segregation (CQRS) Now, when we look at Command Query Responsibility Segregation it may become clearer what it is all about. The goal is to segregate the responsibilities for executing commands and queries.\nThis simply means that in a CQRS system, there would be no place for a BookInventoryService that is responsible for both queries and commands. You could have BookInventoryInformationService and maybe BookLendingService or more.\nThis does not sound like the most practical thing. And in most cases, this is not practical. If you are not sure if you need CQRS, then don’t impose CQRS on your system.\nWhat CQRS often implies When talking about CQRS people often mention a few other concepts in the same sentence.\nSeparate domain model CQRS does not require using a separate domain model for queries and commands. It is often logical to go that route, but you could also use a separate domain model for queries and commands, yet not segregate the responsibilities.\nEvent Sourcing Event Sourcing is not a requirement of CQRS. You can find a great explanation of event sourcing on this eventuate.io blog post. I think they capture the essence of how Event Sourcing works pretty spot on:\nA business object is persisted by storing a sequence of state changing events. Whenever an object’s state changes, a new event is appended to the sequence of events. Since that is one operation it is inherently atomic. An entity’s current state is reconstructed by replaying its events.\nLooking at that idea it is clear how CQRS helps. Commands are in effect streams of events that are persisted in the system. Queries then interpret these events. 
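To make this concrete, here is a minimal, hypothetical sketch in plain Java (the class names follow the book store example, everything else is invented for illustration): commands only append events, and a query reconstructs the current state by replaying them.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch only - an event records a single state change for a book.
class BookEvent {
    final String isbn;
    final String type; // "ADDED" or "LOANED_OUT"
    BookEvent(String isbn, String type) { this.isbn = isbn; this.type = type; }
}

// Command side: methods that perform actions; state is persisted
// as an append-only sequence of events.
class BookLendingService {
    private final List<BookEvent> events;
    BookLendingService(List<BookEvent> events) { this.events = events; }
    public void addBook(String isbn) { events.add(new BookEvent(isbn, "ADDED")); }
    public void loanOut(String isbn) { events.add(new BookEvent(isbn, "LOANED_OUT")); }
}

// Query side: methods that return data without modifying anything;
// the current state is reconstructed by replaying the event stream.
class BookInventoryInformationService {
    private final List<BookEvent> events;
    BookInventoryInformationService(List<BookEvent> events) { this.events = events; }
    public boolean isLoanedOut(String isbn) {
        boolean loaned = false;
        for (BookEvent e : events) {
            if (e.isbn.equals(isbn)) {
                loaned = e.type.equals("LOANED_OUT");
            }
        }
        return loaned;
    }
}
```

In a real deployment the two sides would typically live in separate services with their own storage; sharing one in-memory event list here just keeps the sketch self-contained.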
Using the same domain model, or not separating these responsibilities, would be a mistake.\nChoreographed systems While event sourcing is a radically different architecture, choreography is often more familiar to microservice developers.\nWhen talking about choreography we mean event-driven distributed systems. Rather than microservices being told what to do explicitly, they subscribe to some event source and react to events as they happen.\nWhile these kinds of systems often implement CQRS, this is not what defines CQRS.\nMy thoughts on CQRS As I already mentioned, in most systems it is not necessary to implement CQRS. Moreover, the added complexity may end up detrimental to the system design.\nI think that Choreographed/Event-Driven architecture is often the better choice when designing a microservices system of non-trivial complexity. In this context, CQRS may be something that is worth thinking consciously about.\nMany articles on CQRS take your understanding of the basic concept for granted. If you are interested in the pattern and event-driven services, now is a good time to check them out:\nMartin Fowler on CQRS\nGreg Young explains the basics of CQRS\nLosTechies busting some CQRS myths\nDeveloping Transactional Microservices Using Aggregates, Event Sourcing and CQRS: part 1 and part 2 ","permalink":"https://e4developer.com/posts/cqrs-a-simple-explanation/","summary":"\u003cp\u003e\u003cstrong\u003eCommand Query Responsibility Segregation (CQRS)\u003c/strong\u003e is a pattern that causes quite a lot of confusion. With the popularity of microservices and the event-based programming model, it is important to know what CQRS is. In this article, I will provide you with a simple explanation.\u003c/p\u003e\n\u003cp\u003eTo understand CQRS it is important to get some basic terms and concepts right. 
The first that often appears next to CQRS is CRUD.\u003c/p\u003e\n\u003ch2 id=\"what-is-crud\"\u003eWhat is CRUD?\u003c/h2\u003e\n\u003cp\u003eCRUD stands for Create, Read, Update and Delete. When you think about this, this is what most basic software systems do. You have some records, you may want to read some records, update them, create or delete.\u003c/p\u003e","title":"CQRS - a simple explanation"},{"content":"I have recently spoken at a meetup about Practical Choreography with Spring Cloud Stream. It was a great event where I was asked many questions after the talk. One question got me thinking: “What book about Spring Cloud do you recommend?”, which as it turns out boils down to “How do you learn Spring Cloud?”. I heard that question posed a few times before in different ways. Here, I will give you my answer on what I think is the best way of learning Spring Cloud.\nWith Spring Cloud being probably the hottest framework on the JVM for integrating microservices, the interest in it is growing. Most people interested in microservices are already familiar with Spring Boot. If you haven’t heard of it before, check out my Spring Boot introduction blog post, and definitely see the official site – it has some very good Getting Started Guides.\nWith that out of the way, let’s look at learning Spring Cloud!\nUnderstand the Scope The first thing to do when trying to learn something so big and diverse is understanding the scope. Learning Spring Cloud can mean many things. 
First of all, Spring Cloud currently contains:\nSpring Cloud Config\nSpring Cloud Netflix\nSpring Cloud Bus\nSpring Cloud for Cloud Foundry\nSpring Cloud Cloud Foundry Service Broker\nSpring Cloud Cluster\nSpring Cloud Consul\nSpring Cloud Security\nSpring Cloud Sleuth\nSpring Cloud Data Flow\nSpring Cloud Stream\nSpring Cloud Stream App Starters\nSpring Cloud Task\nSpring Cloud Task App Starters\nSpring Cloud Zookeeper\nSpring Cloud for Amazon Web Services\nSpring Cloud Connectors\nSpring Cloud Starters\nSpring Cloud CLI\nSpring Cloud Contract\nSpring Cloud Gateway\nWow! This is a lot to take in! Clearly, the number of different projects here means that you can’t learn it by simply going through them one by one with a hope of understanding or mastering Spring Cloud by the end of it.\nSo, what is the best strategy for learning such an extensive framework (or a microservice blueprint, as I describe it in another article)? I think the most sensible way of learning is understanding what you would like to use Spring Cloud for. Setting yourself a learning goal.\nGoal Oriented Learning What kind of learning goals are we talking about here? Let me give you a few ideas:\nSet up communication between microservices based on Spring Cloud Stream\nBuild microservices that use configuration provided by Spring Cloud Config\nBuild a small microservices system based on Orchestration- what is needed and how to use it\nTest microservices with Spring Cloud Contract\nUse Spring Cloud Data Flow to take data from one place, modify it and store it in Elasticsearch\nIf you are interested in learning some parts of Spring Cloud, think of an absolutely tiny project and build it! Once you have done it, you know that you understood at least the basics and you validated it by having something working. I will quote Stephen R. Covey here (author of “The 7 Habits of Highly Effective People”):\n“to learn and not to do is really not to learn. 
To know and not to do is really not to know.”\nWith topics as complex and broad as Spring Cloud, this quote rings very true!\nStudy You picked your goal and you want to get started. What resources can help you? I will give you a few ideas here, but remember- the goal is to learn only as much as necessary in order to achieve your goal. Don’t learn much more just yet, as you may end up overwhelmed and move further away from completing your goal. There will be time to learn more in depth. Let’s assume that your goal is Using Spring Cloud Config correctly in your personal project. Here are the resources I recommend:\nOfficial Spring Cloud Config Quickstart to get a basic idea\nIf you enjoy books and want to learn more Spring Cloud in the future – Spring Microservices in Action is a great reference. Don’t read it all yet! Check out the chapters on Spring Cloud Configuration and read as much as necessary to know what to do.\nIf you use Pluralsight, then check out Java Microservices with Spring Cloud: Developing Services – a very good introduction! Again, start with the chapters on Spring Cloud Config.\nYou can google the topic and find articles like Quick Intro to Spring Cloud Configuration\nYou can even find YouTube videos about Spring Cloud Config\nI really want to make a point here. There is a huge amount of resources out there, free or paid, of very high quality. You can spend weeks just reviewing them, but this is a mistake. Choose what works for you and get moving towards your goal!\nDo something – achieve your goal Once you have identified the resources you need, get on with your goal! If your goal was to learn about Spring Cloud Config- set up the server, get the clients connecting and experiment with it.\nYou should have enough information to complete your simple task. If you find that something is not working- great! 
That shows that you need to revisit the resources and correct your understanding.\nIf you completed your goal, but you want to experiment more with the tech- go for it! You have something working and playing with it is much more fun than reading dry tech documentation.\nBy playing with the technology you start to notice nuances and develop a deeper understanding. That understanding will not be easily acquired by reading countless articles, as most things would just fly over your head.\nStudy Again Once you completed your goal and played a little with the tech you should have a much better idea of what you are dealing with. Now is the time to go deep! Read all you can around the area that you explored. See what you could have done differently, how it is used and what the best practices are.\nNow, all the reading you will do will make much more sense and will be more memorable. Suddenly dry documentation turns into fascinating discoveries of what you could have done better. And the best of all- if something sounds really great- you have your test-bed to try it.\nTeach Teaching others really helps with memorizing and understanding the subject. This is one of the reasons why I am writing this blog. You not only get a chance of sharing your knowledge but also learn yourself by teaching.\nIf blogging is not your thing, you can talk to your colleagues or friends about what you have been tinkering with. You may be confronted with questions or perspectives that you did not consider before- great! Another chance to make the learning more complete.\nOne thing to remember is- don’t be afraid to teach. Even if what you have just learned seems basic to you- it was not so basic before you started learning it! If you were in this position, then so must be countless others!\nThere is value in the unique way you can explain the subject. 
Especially given your practical experience gained from the goal that you achieved.\nStaying up to Date Spring Cloud is constantly changing and growing. If your ultimate goal is becoming an expert in this ecosystem, then you need to think about ways of staying up to date.\nOne thing that is pretty much a must is working with it. If you are not lucky enough to use it in your day job- make sure that you use it in your spare time. You could be building a personal project making use of the tech or simply tinkering with it and trying different things. What matters is that you actually get that hands-on experience.\nThe second part of staying fresh is knowing what’s coming and reading other people’s experiences. Some of the sources I really enjoy following are:\nThe Spring.io blog with a very good newsletter\nBaeldung – an amazing source of Spring related articles and a weekly newsletter\nInfoQ Microservices – a huge and very active website maintained by multiple authors\nUsing Twitter to stay up to date and see what people are reading. I share plenty of articles on that topic with my @bartoszjd account.\nThese are just some of the sources that I follow. There are countless others. The point is to choose some that you enjoy reading and keep an eye out for exciting stuff.\nConclusion Spring Cloud is a huge and fascinating set of tools for building microservices. It can’t be learned as a “single thing”. Using different goals is the best way of approaching this learning.\nThe idea presented here can be used for learning any technical concept. I found it extremely beneficial for myself and used it with success. 
I really recommend checking out SimpleProgrammer’s Learning to learn article which describes a very similar idea for learning new technologies or frameworks.\nHappy learning!\n","permalink":"https://e4developer.com/posts/how-to-learn-spring-cloud-the-practical-way/","summary":"\u003cp\u003eI have recently spoken at a meetup about \u003ca href=\"https://e4developer.com/posts/practical-choreography-with-spring-cloud-presentation/\"\u003ePractical Choreography with Spring Cloud Stream\u003c/a\u003e. It was a great event where I was asked many questions after the talk. One question got me thinking: “What book about Spring Cloud do you recommend?”, which as it turns out boils down to \u003cem\u003e“How do you learn Spring Cloud?”.\u003c/em\u003e I heard that question posed a few times before in different ways. Here, I will give you my answer on what I think is the best way of learning Spring Cloud.\u003c/p\u003e","title":"How to learn Spring Cloud - the practical way"},{"content":"JSON Binding (JSON-B) is the new Java EE specification for converting JSON messages to Java Objects and back. JSON is used everywhere and so far we have had two main ways of dealing with JSON conversion in Java- using either Jackson or GSON. With the introduction of JSON-B, we have a standard way of handling this conversion. In this article, we will see how Spring Boot 2.0 supports JSON-B, how easy it is to use it and how it compares with the other options.\nIf you want to know more about JSON-B itself, you can visit the official website or read the four-part Getting started with the JSON API series by IBM. I found the fourth part comparing JSON-B, Jackson, and GSON very interesting. Now it is time to see JSON-B in action!\nConverting Java Objects to JSON with Spring Boot 2.0 and JSON-B How do you get JSON-B to work with Spring Boot 2.0? It is very simple. 
You need to add the required Maven dependencies:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;javax.json.bind\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;javax.json.bind-api\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.0\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.eclipse\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;yasson\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.0\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.glassfish\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;javax.json\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.1\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; And you need to choose the preferred-json-mapper setting to make sure that JSON-B is chosen. You may get GSON or Jackson on the classpath and then you can’t be sure how the autoconfiguration will work without this setting:\nspring.http.converters.preferred-json-mapper=jsonb With that in place, we are going to write a simple Rest Controller and a simple Car class that will make use of the JSON-B conversion:\npackage com.e4developer.jsonbexample; import org.springframework.web.bind.annotation.*; import java.util.Calendar; import java.util.Optional; @RestController public class SimpleController { private Car makeCar() { Car newCar = new Car(); newCar.make = \u0026#34;e4Cars\u0026#34;; newCar.model = \u0026#34;theSensible\u0026#34;; newCar.bonusFeatures = Optional.empty(); newCar.price = 6000; newCar.productionDate = new Calendar.Builder().setDate(2018, 3, 3).build(); return newCar; } @GetMapping(\u0026#34;/car\u0026#34;) public Car car() { Car newCar = makeCar(); return newCar; } } package com.e4developer.jsonbexample; import java.util.Calendar; import java.util.Objects; import java.util.Optional; public class Car { public String make; public String model; public int price; public 
Calendar productionDate; public Optional\u0026lt;String\u0026gt; bonusFeatures; @Override public String toString() { return \u0026#34;Car{\u0026#34; + \u0026#34;make=\u0026#39;\u0026#34; + make + \u0026#39;\\\u0026#39;\u0026#39; + \u0026#34;, model=\u0026#39;\u0026#34; + model + \u0026#39;\\\u0026#39;\u0026#39; + \u0026#34;, price=\u0026#34; + price + \u0026#34;, productionDate=\u0026#34; + productionDate + \u0026#34;, bonusFeatures=\u0026#34; + bonusFeatures + \u0026#39;}\u0026#39;; } @Override public boolean equals(Object o) { if (this == o) return true; if (o == null || getClass() != o.getClass()) return false; Car car = (Car) o; return price == car.price \u0026amp;\u0026amp; Objects.equals(make, car.make) \u0026amp;\u0026amp; Objects.equals(model, car.model) \u0026amp;\u0026amp; Objects.equals(productionDate, car.productionDate) \u0026amp;\u0026amp; Objects.equals(bonusFeatures, car.bonusFeatures); } @Override public int hashCode() { return Objects.hash(make, model, price, productionDate, bonusFeatures); } } Now let’s see how the response from that endpoint looks like:\n{ \u0026#34;make\u0026#34;: \u0026#34;e4Cars\u0026#34;, \u0026#34;model\u0026#34;: \u0026#34;theSensible\u0026#34;, \u0026#34;price\u0026#34;: 6000, \u0026#34;productionDate\u0026#34;: \u0026#34;2018-04-03T00:00:00+01:00[Europe/London]\u0026#34; } It is worth noting a few things. Treatment of the Calendar is specific to JSON-B specification and so is the simple disappearing of the missing Optional. 
I think the treatment of these two concepts is quite clean here.\nThe equivalent response with GSON would be:\n{ \u0026#34;make\u0026#34;: \u0026#34;e4Cars\u0026#34;, \u0026#34;model\u0026#34;: \u0026#34;theSensible\u0026#34;, \u0026#34;price\u0026#34;: 6000, \u0026#34;productionDate\u0026#34;: { \u0026#34;year\u0026#34;: 2018, \u0026#34;month\u0026#34;: 3, \u0026#34;dayOfMonth\u0026#34;: 3, \u0026#34;hourOfDay\u0026#34;: 0, \u0026#34;minute\u0026#34;: 0, \u0026#34;second\u0026#34;: 0 }, \u0026#34;bonusFeatures\u0026#34;: {} } Note the different treatment of the Calendar and the quite peculiar thing that happens to Optional. It is not quite null, but not quite an Object either.\nAnd with Jackson:\n{ \u0026#34;make\u0026#34;: \u0026#34;e4Cars\u0026#34;, \u0026#34;model\u0026#34;: \u0026#34;theSensible\u0026#34;, \u0026#34;price\u0026#34;: 6000, \u0026#34;productionDate\u0026#34;: \u0026#34;2018-04-02T23:00:00.000+0000\u0026#34;, \u0026#34;bonusFeatures\u0026#34;: null } Again, the Calendar is treated differently and the treatment of Optional at least can be seen as sensible- it is clearly a null.\nIf you are interested in a more detailed comparison between the JSON-B, GSON, and Jackson standards, I once again recommend the IBM article dealing with the subject.\nI showed you these three different examples to really make a point here. There is a lot of value in an official standard that deals with converting Java to JSON. Of course, Java EE-championed standards were not always successful. The original EJBs were mostly a disaster (sorry if you liked them, but sadly they did not catch on). CDI beans were much better (although they did not get enough traction in my opinion). JPA, on the other hand, was very successful and it is still very popular.\nI strongly believe that JSR 367 (the Java Specification Request for JSON-B) is here to stay and will bring a lot of good to the JVM ecosystem. 
I think we all want a more united and more seamless JSON development experience in the Java world.\nNow we will see how these different libraries deal with JSON to Java conversion.\nConverting JSON to Java Objects It is important for a converter to be able to convert both ways between a Java Object and JSON without changing the object itself. With the new standard, I wanted to see how well it handles this task.\nWe will use the original Car Object, get the JSON from JSON-B, GSON, and Jackson and then feed it back with a POST to see what the recreated Object looks like. We will check equality against the original Object and inspect any potential changes in the payload when returned once again.\nFor that, I wrote a simple Controller endpoint:\n@PostMapping(\u0026#34;/sendCar\u0026#34;) public Car sendCar(@RequestBody Car car){ System.out.println(\u0026#34;original car: \u0026#34;+makeCar()); System.out.println(\u0026#34;transformed car: \u0026#34;+car.toString()); System.out.println(\u0026#34;Is the car the same? \u0026#34;+ car.equals(makeCar())); return car; } Conversion with JSON-B JSON-B passes the simple test of returning the same JSON text that it is sent.\nJSON-B fails the equality check! It turns out that JSON-B assigns a naked null rather than the Optional value to the field when it is not present in the JSON! This is very disappointing, as it may cause unexpected bugs. To be clear, what happens is:\nJSON-B deals with Optional.empty() by not including it in the JSON response.\nWhen JSON parsed into a Java Object does not have a value for an Optional\u0026lt;\u0026gt; field, it assigns null rather than Optional.empty().\nThis breaks the idea of Optional.\nThis is disappointing enough that I hope it will change in future iterations of the standard.\nConversion with GSON GSON passes the simple test of returning the same JSON text that it is sent.\nGSON also passes the equality check! 
It correctly deals with the Optional and the Calendar cases.\nConversion with JACKSON Jackson passes the simple test of returning the same JSON text that it is sent.\n**Jackson fails the equality check!** While Jackson handles the Optional case correctly, it fails to deal with the Calendar correctly. It loses the Zone information (in my case Europe/London) and changes it to UTC. This simplification can cause unexpected bugs in different systems.\nConclusion I believe that JSON-B is a great idea for a new standard. JSON became so important that having the Java community agree on a good way of dealing with it can be very helpful.\nIt is apparent that dealing with JSON to Java and back is not as trivial as it may seem. As demonstrated here, these conversions may end up causing some unexpected side effects. Personally, I would like JSON-B to be more like GSON- perhaps sacrifice human readability for a clean back-and-forth conversion.\nNo matter which converter you choose, Spring Boot 2.0 makes it easy to use. I am looking forward to JSON-B entering the field as the third worthy contender for your JSON converter of choice.\nThe code used for this article is available on my GitHub account.\n","permalink":"https://e4developer.com/posts/introducing-json-b-with-spring-boot-2-0/","summary":"\u003cp\u003eJSON Binding (JSON-B) is the new Java EE specification for converting JSON messages to Java Objects and back. JSON is used everywhere and so far we have had two main ways of dealing with JSON conversion in Java- using either Jackson or GSON. With the introduction of JSON-B, we have a standard way of handling this conversion. In this article, we will see how Spring Boot 2.0 supports JSON-B, how easy it is to use it and how it compares with the other options.\u003c/p\u003e","title":"Introducing JSON-B with Spring Boot 2.0"},{"content":"As part of running this blog, I have decided to create a mailing list and a semi-regular newsletter. This is the first edition of that newsletter. 
If you would like to join it and receive the content in your email, you can sign up here.\nI started this blog on the 13th of January 2018, hoping to share my passion, knowledge and experience with like-minded people online. The first blog post: Starting a blog – why? gives a bit more insight. So far this journey has been amazing.\nNew Articles On Team Building:\nHelping your team – Draw together!\nHelping your team – Start using pull requests\nBuilding services requires building teams\nOn Spring Cloud:\nSpring Cloud – Blueprint for Successful Microservices\nOn Microservices in general:\nStarting with Microservices: Read “Building Microservices”\nJava Enterprise and Microservices – meet Microprofile!\nMicroservices Toolbox – Docker\nMicroservices Toolbox: Spring Boot\nMicroservices – Five benefits from the developer perspective\nWhat you need to know about Spring Boot 2.0 (RC1)\nThe business case for Microservices\nCommon Technical Debt in Microservices\nApplication of GRASP to Microservices\nHATEOAS – a simple explanation\nOn Career:\nSeven Essential Skills for Microservices Developers\nOn Choreography with RabbitMQ:\nSetting up RabbitMQ with Spring Cloud Stream\nHandling bad messages with RabbitMQ and Spring Cloud Stream\nTracing messages in Choreography with Sleuth and Zipkin\nOn Spring Cloud Data Flow:\nGetting Started with Spring Cloud Data Flow\nSpring Cloud Data Flow – Making Custom Apps and Using Shell\nSpring Cloud Data Flow – Use Cases\nMy Public Speaking:\nPractical Choreography with Spring Cloud – Presentation\nBig News Spring Boot 2.0 was released. This is a big deal. Check out the official communication!\nSpring Boot 2.0 goes GA\nand the release notes:\nSpring Boot 2.0 release notes\nWould you like to join my mailing list? 
Subscribe here!\n","permalink":"https://e4developer.com/posts/e4developer-newsletter-february-2018-number-1/","summary":"\u003cp\u003eAs part of running this blog, I have decided to create a mailing list and a semi-regular newsletter. This is the first edition of that newsletter. If you would like to join it and receive the content in your email, you can \u003cstrong\u003e\u003ca href=\"https://www.e4developer.com/newsletter/\"\u003esign up here\u003c/a\u003e.\u003c/strong\u003e\u003c/p\u003e\n\u003cp\u003eI started this blog on the 13th of January 2018, hoping to share my passion, knowledge and experience with like-minded people online. The first blog post: \u003cstrong\u003e\u003ca href=\"https://e4developer.com/posts/starting-a-blog-why/\"\u003eStarting a blog – why?\u003c/a\u003e\u003c/strong\u003e gives a bit more insight. So far this journey has been amazing.\u003c/p\u003e","title":"E4developer Newsletter - February 2018 - Number 1"},{"content":"I have recently spent quite a lot of time playing with Spring Cloud Data Flow (SCDF). It is an amazing platform that can be used for many things. Talking about it with some of my colleagues, I realized that not everyone knows what the common use cases are. Thinking about it further I realized that I don’t know the full scope of capabilities and business problems that it can solve! In this article I look at different uses for Spring Cloud Data Flow based on what the platform offers and actual stories from companies using it in production. The examples come from the Spring One Platform 2017 conference.\nBefore going into use cases, let’s just make sure that we understand what Spring Cloud Data Flow is. Taken from the official website:\nSpring Cloud Data Flow is a toolkit for building data integration and real-time data processing pipelines.\nPipelines consist of Spring Boot apps, built using the Spring Cloud Stream or Spring Cloud Task microservice frameworks. 
This makes Spring Cloud Data Flow suitable for a range of data processing use cases, from import/export to event streaming and predictive analytics.\nReally, I could not be more concise and precise here. If you are curious about familiarizing yourself more with Data Flow, I wrote a Getting Started article on this blog. With the basics established, we can look at what such a powerful platform can be used for.\nWhat can Spring Cloud Data Flow be used for? Let’s explore the core things that can be done with Spring Cloud Data Flow:\nETL processing between file systems and databases – Being a platform for orchestration, building Extract, Transform, Load (ETL) processes is one of the core strengths of Spring Cloud Data Flow. These ETLs can be made real-time with streaming or run as batch processes. The GUI designer makes designing the workflows easy and enjoyable.\nBuilding real time data analytics – Because it is so easy to create and configure the pipelines, Data Flow can be used for building real time analytics. There are multiple good example uses of Twitter analytics.\nIntegrating data oriented microservices – Moving data between microservices is made easier with the platform. Microservices are supposed to own their own data, but that does not mean that the data never has to be moved. This can be seen as a sub-case of the ETL use case.\nEvent streaming – If you want to use messaging in your system, but you want to have only explicit flows (as opposed to the choreographed approach), Spring Cloud Data Flow is the tool that you were looking for.\nWhat are different companies using Spring Cloud Data Flow for? This section focuses on documented use cases from real-world production deployments of Spring Cloud Data Flow. All the talks and videos here are from the SpringOne Platform 2017 by Pivotal. 
Big thanks to Pivotal for making them available on YouTube!\nCoreLogic – Batch processing of risk calculations Batch processing is a perfect candidate for modernization with Spring Cloud Data Flow. CoreLogic gave an excellent presentation about their journey with the platform that concluded with a new feature delivered to production much faster than was possible before:\nHealth Care Service Corporation – Large Scale ETL processing HCSC is making use of Spring Cloud Stream for processing vast amounts of data in a cloud-native environment. They also gave a great presentation at SpringOne Platform, although they don’t mention Spring Cloud Data Flow there. Given that Spring Cloud Stream is a core component of Data Flow I still think this is a very informative presentation, and definitely worth watching when talking about Data Flow use cases:\nCharles Schwab – Processing Trade Events At SpringOne Platform, Charles Schwab gave a presentation explaining that they use Spring Cloud Data Flow for processing their trade events in real time. You can watch the presentation here, although it focuses mostly on tracing with Sleuth and Zipkin (interesting in its own right) in the context of asynchronous processing:\nBONUS: Liberty Mutual – Deconstructing a monolith with Domain Driven Design This presentation is about much more than Spring Cloud Stream, but if you are thinking of making use of Domain Driven Design, deconstructing monoliths or event sourcing, it is an amazing watch:\nSummary Spring Cloud Data Flow is a tool that has many use cases – orchestrating event streams, batch processing, data analytics and more. What is reassuring is that despite being a relatively new product it is being adopted all over the world by world-class organisations. With extensive production use it becomes a viable choice as a data integration tool for modern companies.
I am absolutely sure that in the coming months and years we will see more fascinating use cases of the platform as it gains popularity. If you are using Spring Cloud Data Flow in production already – let us know in the comments!\n","permalink":"https://e4developer.com/posts/spring-cloud-data-flow-use-cases/","summary":"\u003cp\u003eI have recently spent quite a lot of time playing with Spring Cloud Data Flow (SCDF). It is an amazing platform that can be used for many things. Talking about it with some of my colleagues I realized that not everyone knows what the common use cases are. Thinking about it further I realized that I don’t know the full scope of capabilities and business problems that it can solve! In this article I look at different uses for Spring Cloud Data Flow based on what the platform offers and actual stories from companies using it in production. The examples come from the \u003ca href=\"https://springoneplatform.io/2017\"\u003eSpring One Platform 2017\u003c/a\u003e conference.\u003c/p\u003e","title":"Spring Cloud Data Flow - Use Cases"},{"content":"Last week I wrote about getting started with Spring Cloud Data Flow. This week I want to show you a few more things that you can do with this amazing platform. In this article I will show you how to make your own Apps that can be part of Data Flow Streams and how to use the Data Flow Shell to control the platform.\nI assume here that you know how to get your Data Flow up and running and you are familiar with the basics of the platform. If not – don’t worry! Check out my Getting Started with Spring Cloud Data Flow article to learn the basics.\nIntroducing Spring Cloud Data Flow Shell As you know, to control Data Flow you can use the graphical Dashboard available as part of the platform. Sometimes, this is not the most efficient way of working.
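To give a taste of the Shell before we set it up, a short session might look like this (a sketch – the stream name and definition here are hypothetical, and dataflow:> is the Shell prompt):

```
dataflow:>app list
dataflow:>stream create my-logger --definition "http | log" --deploy "true"
dataflow:>stream list
```

The app register and stream create commands used later in this article follow exactly this pattern.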
You can download the Shell application as well:\nwget https://repo.spring.io/release/org/springframework/cloud/spring-cloud-dataflow-shell/1.3.1.RELEASE/spring-cloud-dataflow-shell-1.3.1.RELEASE.jar\nI would go as far as saying that you should download the Shell if you are serious about working with Data Flow. Once you have it on your machine it can be run like any other jar from the command line:\njava -jar spring-cloud-dataflow-shell-1.3.1.RELEASE.jar\nIf it starts successfully you should see the following screen:\nWhat can you do with the Shell? Most of the things that you can do with the graphical Dashboard, but often faster and more reliably. You don’t get the graphical analytics for obvious reasons. To see what sort of commands you have at your disposal you can type help:\nLet’s now build a custom application before we start playing with the Shell. We will use the Shell later to register that App and build a new Stream.\nBuilding Custom App for Spring Cloud Data Flow Building Apps for Spring Cloud Data Flow is very simple. If you are familiar with Spring Cloud Stream, you already know how to do it! A quick reminder – there are three types of Apps that you can create:\nSource – These are the available sources of data. You start your streaming pipelines from them. Processor – These take data and send them further in the processing pipeline. They sit in the middle. Sink – They are the endpoints for the streams. This is where the data ends up. These three concepts are understood by Spring Cloud Stream as well.\nI want to create a simple Stream that takes a sample of Tweets from Twitter, extracts just the text and saves it all to a file. Spring Cloud Data Flow Starters already give me the Twitter Source and a File Sink. That means that I need to write a Processor that extracts just the text from each individual tweet, marks the start and the end and sends that to the Sink.\nLet’s start with the POM file.
We need the standard Spring Cloud Stream dependencies. Since the POM is not too large I will show it here in its entirety (if you would like to just copy paste):\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;project xmlns=\u0026#34;http://maven.apache.org/POM/4.0.0\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; xsi:schemaLocation=\u0026#34;http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\u0026#34;\u0026gt; \u0026lt;modelVersion\u0026gt;4.0.0\u0026lt;/modelVersion\u0026gt; \u0026lt;groupId\u0026gt;com.e4developer\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;tweet-processor\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;0.0.1-SNAPSHOT\u0026lt;/version\u0026gt; \u0026lt;packaging\u0026gt;jar\u0026lt;/packaging\u0026gt; \u0026lt;name\u0026gt;tweet-processor\u0026lt;/name\u0026gt; \u0026lt;description\u0026gt;Demo project for Spring Boot\u0026lt;/description\u0026gt; \u0026lt;parent\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-parent\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.5.10.RELEASE\u0026lt;/version\u0026gt; \u0026lt;relativePath/\u0026gt; \u0026lt;/parent\u0026gt; \u0026lt;properties\u0026gt; \u0026lt;project.build.sourceEncoding\u0026gt;UTF-8\u0026lt;/project.build.sourceEncoding\u0026gt; \u0026lt;project.reporting.outputEncoding\u0026gt;UTF-8\u0026lt;/project.reporting.outputEncoding\u0026gt; \u0026lt;java.version\u0026gt;1.8\u0026lt;/java.version\u0026gt; \u0026lt;spring-cloud.version\u0026gt;Edgware.SR2\u0026lt;/spring-cloud.version\u0026gt; \u0026lt;/properties\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-actuator\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; 
\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.cloud\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-cloud-starter-stream-rabbit\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-test\u0026lt;/artifactId\u0026gt; \u0026lt;scope\u0026gt;test\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.cloud\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-cloud-stream-test-support\u0026lt;/artifactId\u0026gt; \u0026lt;scope\u0026gt;test\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; \u0026lt;dependencyManagement\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.cloud\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-cloud-dependencies\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;${spring-cloud.version}\u0026lt;/version\u0026gt; \u0026lt;type\u0026gt;pom\u0026lt;/type\u0026gt; \u0026lt;scope\u0026gt;import\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; \u0026lt;/dependencyManagement\u0026gt; \u0026lt;build\u0026gt; \u0026lt;plugins\u0026gt; \u0026lt;plugin\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-maven-plugin\u0026lt;/artifactId\u0026gt; \u0026lt;/plugin\u0026gt; \u0026lt;/plugins\u0026gt; \u0026lt;/build\u0026gt; \u0026lt;/project\u0026gt; And now the only file you actually have to edit manually in order to build this custom App. If you are using a good IDE that POM can be generated by an integrated Spring Initializr. 
The TweetProcessorApplication in its entirety:\npackage com.e4developer.tweetprocessor; import com.fasterxml.jackson.core.type.TypeReference; import com.fasterxml.jackson.databind.ObjectMapper; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; import org.springframework.cloud.stream.annotation.EnableBinding; import org.springframework.cloud.stream.annotation.StreamListener; import org.springframework.cloud.stream.messaging.Processor; import org.springframework.messaging.handler.annotation.SendTo; import java.util.Map; @EnableBinding(Processor.class) @SpringBootApplication public class TweetProcessorApplication { public static void main(String[] args) { SpringApplication.run(TweetProcessorApplication.class, args); } @StreamListener(target = Processor.INPUT) @SendTo(Processor.OUTPUT) public String extractText(String tweet) throws Exception { ObjectMapper mapper = new ObjectMapper(); Map\u0026lt;String, Object\u0026gt; tweetMap = mapper.readValue(tweet, new TypeReference\u0026lt;Map\u0026lt;String,Object\u0026gt;\u0026gt;(){}); return \u0026#34;START-TWEET-TEXT:\u0026#34;+tweetMap.get(\u0026#34;text\u0026#34;)+\u0026#34;:END-TWEET-TEXT\u0026#34;; } } You can notice the key annotation: @EnableBinding(Processor.class) that marks this application as a Processor. The other interesting pieces are the @StreamListener(target = Processor.INPUT) and @SendTo(Processor.OUTPUT) annotations. You may wonder – which INPUT and OUTPUT does this actually link to? Is there any need to configure anything? Great news! There is no configuration whatsoever needed here. This is all configured by Data Flow when the App is used as part of a Stream.\nI made this App available on GitHub if you want to clone it.\nAdding Custom App to a Data Flow Stream Now that we have the simple TweetProcessorApp it is time to add it to Data Flow.
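To make the transform concrete, here is what the processor does to an incoming payload – a sketch with a made-up tweet (real tweet JSON carries many more fields, elided here):

```
input:  {"created_at":"…","text":"Hello Data Flow!","user":{…}}
output: START-TWEET-TEXT:Hello Data Flow!:END-TWEET-TEXT
```

Everything except the "text" field is discarded, which keeps the downstream File Sink output readable.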
The first thing we will do is install it into our local Maven repository. You are probably familiar with Maven, but just in case: go to the project directory and run the command:\nmvn install\nIf this ends with SUCCESS, then you have the app available in your local Maven repository.\nTo register the App with Data Flow you need to know its Maven coordinates. In this case the project is defined in the POM as follows:\n\u0026lt;groupId\u0026gt;com.e4developer\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;tweet-processor\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;0.0.1-SNAPSHOT\u0026lt;/version\u0026gt; \u0026lt;packaging\u0026gt;jar\u0026lt;/packaging\u0026gt; That makes the Maven URI: maven://com.e4developer:tweet-processor:0.0.1-SNAPSHOT. With that knowledge, let’s open the Data Flow Shell once again and type:\ndataflow:\u0026gt;app register --name tweet-processor --type processor --uri maven://com.e4developer:tweet-processor:0.0.1-SNAPSHOT\nThis will register the app under the name tweet-processor. You should see the message: “Successfully registered application ‘processor:tweet-processor\u0026rsquo;”, but let’s just be doubly sure here and play a bit more with the Shell itself. Type into the shell:\napp list\nto see all the registered apps, and among them the tweet-processor:\nNow it is time to create that Stream. This is going to be quite a large command. When using the twitterstream App from the starter apps you need to provide it with different keys from your Twitter account.
To get yourself one, check out the Twitter Application Management page.\nTo create and deploy a Stream from the Shell you need the following command:\nstream create STREAM_NAME --definition \u0026quot;STREAM_DEFINITION\u0026quot; --deploy \u0026quot;true\u0026quot;\nThe definition of the Stream we described (Read tweets -\u0026gt; get the text -\u0026gt; save to a file) follows:\ntwitterstream --access-token-secret=SECRET --access-token=SECRET --consumer-secret=SECRET --consumer-key=SECRET --stream-type=sample | tweet-processor | file --directory=c:\\scdf\nOf course, for this to actually work, you need to replace each SECRET with the relevant secrets that you can get from your Twitter Application Page.\nBy combining the definition and deployment command we get:\nstream create super-tweets --definition \u0026quot;twitterstream --access-token-secret=SECRET --access-token=SECRET --consumer-secret=SECRET --consumer-key=SECRET --stream-type=sample | tweet-processor | file --directory=c:\\scdf\u0026quot; --deploy \u0026quot;true\u0026quot;\nOnce this gets written into the Shell we should see:\nAnd the Stream should be deployed and working! You can see all these crazy Tweet messages being saved in c:\\scdf:\nSummary What you have read here is quite simple, but very important. Defining custom Apps lies at the core of what Spring Cloud Data Flow can be used for. Using the Shell for working with the platform can be incredibly useful. With this knowledge you will be able to start doing some very powerful things with Data Flow. In future articles I will look at more complex Streams and the built-in analytics that you get with Spring Cloud Data Flow.\n","permalink":"https://e4developer.com/posts/spring-cloud-data-flow-making-custom-apps-and-shell/","summary":"\u003cp\u003eLast week I wrote about getting started with \u003ca href=\"https://e4developer.com/posts/getting-started-with-spring-cloud-data-flow/\"\u003eSpring Cloud Data Flow\u003c/a\u003e.
This week I want to show you a few more things that you can do with this amazing platform. In this article I will show you how to make your own Apps that can be part of Data Flow Streams and how to use the Data Flow Shell to control the platform.\u003c/p\u003e\n\u003cp\u003eI assume here that you know how to get your Data Flow up and running and you are familiar with the basics of the platform. If not – don’t worry! Check out my \u003ca href=\"https://e4developer.com/posts/getting-started-with-spring-cloud-data-flow/\"\u003eGetting Started with Spring Cloud Data Flow\u003c/a\u003e article to learn the basics.\u003c/p\u003e","title":"Spring Cloud Data Flow - Making Custom Apps and Using Shell"},{"content":"When talking about microservices (or agile development), the idea of self-managed, independent teams often comes up. What is not emphasized enough is how vital this idea is to the successful adoption of such a highly decoupled architecture. In this article, I will share my experiences of working with independent teams and being part of them.\nMost successful projects that I was either part of, or had the pleasure of working with, had one thing in common. They were delivered by one or more high-performing, self-managed, independent teams. You may say: every sizable project nowadays is delivered by a team! I disagree: most medium and large projects are delivered by groups of people, but in many cases, I would not call them teams. In my opinion, it takes more than grouping a few people together and slapping a name on them to make a team…\nWhat is a team? What is a team then? I will answer that in the context of software development of course. I define a team as: “a group of people working together on a number of shared goals”. Let’s unpack that sentence:\nGroup of people – you don’t need any specific roles to make a team. You need a group of people – so you need at least two people (although that would be a very small group).
Working together – this is where many “teams” fail. Working together goes beyond just working in the same room, or on the same project. There needs to be collaboration. One person helping another. If someone is not an expert in a given technology, she should not be afraid to ask a teammate for help. If someone needs help, other members step in. The work is done together, not alone. Shared goals – goals should be set at a team level. If there is some UI work to be done, it is not given directly to the UI expert; rather, the team decides how to deal with it. Maybe there is another person that could do the task with only minimal help from the single UI expert on the team? Finding the best way of doing the work is left to the team. Shared goals delivered by a group of people truly working together – this is how you spot a team. What is an independent team? So what is an independent team? Of course, no one is truly independent in most organizations, so let’s narrow the description of what sort of independence we are talking about here.\nWe want teams that can deliver their goals with minimal dependencies on people outside of the team. The ultimate goals for most software development teams are working features and products in production. Let’s see what things can stop the team from being independent:\nReliance on testing done outside of the team Reliance on separate teams deploying the service The design is done outside of the team Reliance on business knowledge not easily available to the team Inability to communicate with other teams to get their services integrated Any other person or process dependency stopping the team from delivering the work efficiently I mentioned things that can block a team whose goal is to deliver features to production. There are different teams out there whose goals may be different. If your team is tasked with building automated testing for an already existing application, your dependencies can differ.
If you work on the sort of software that is not deployed in production (e.g. software installed on end-user devices), dependencies would be different again.\nOwnership of a service To achieve independence and general success in your microservices development, each microservice should be owned by a dedicated team. Microservices are small enough that they should be easily managed by one team. If there is more than one team owning a single microservice, this will impede their independence. Suddenly the teams (likely working on separate goals) are dependent on each other when making changes to the service.\nCan teams manage more than one service? To be clear – each team can own multiple services. Once work is mostly finished on a given microservice, there is nothing stopping a team from taking ownership of another one. It is actually expected and completely normal for one team to own multiple microservices. The challenge for the team is keeping the microservices independent from each other. This is a technical topic that I explore in a dedicated blog post.\nDevOps is a requirement, not an option If you want to have truly high-performing teams, the developers and operations have to really come together. Ideally, you would want your team to be responsible for deploying the services, but for many companies, this is not practical yet.\nA potential solution is teaming up with Operations closely enough that Developers end up being closely involved in the provisioning of the service. This has many benefits:\nTechnical debt on both sides is made clear. Any challenges that operations are facing become transparent to the developers and become their challenges as well Problems with deployment can be investigated rapidly. Developers know the system better so they can understand system errors more quickly The service team becomes more independent, as they understand the deployment process better.
They can get ahead of potential issues faced by operations Building communication channels that might not have existed before Many more – often specific to your own organization DevOps is a culture rather than a specific process. Make sure that your company is adopting that culture, as this does not happen overnight.\nStructure of an independent team There is much talk about the structure of independent teams. Should these include people traditionally involved in operations? Should they include manual testers?\nIn the best scenario, they should include anyone who spends the majority of their day working towards fulfilling the team’s goals (even though they may not be members of the team yet). This list includes but is not limited to:\nTesters Operations Architects Business Analysts Developers The structure of the team promotes collaboration and working together. If there is a chance to get more people collaborating closely, it is a chance worth exploring.\nRoles that operate across multiple teams There are a number of roles that can operate across different teams. Architects or business analysts, for example. You can then think of your teams on two levels:\nService teams – these are the teams that this article is mainly about. Teams delivering services, each service should belong to a single team Project teams – much larger, combining multiple service teams and people that may work across different service teams. One project team can contain multiple Service teams Some companies do not have projects as such and there are teams that work on multiple projects… Well, in this case, we are dealing with different specialized teams (information security or enterprise architecture are good examples) that have their own goals, potentially different from those of service teams. The key to building a successful working relationship with them is communication.\nConway’s law in action – the importance of communication You might have heard about Conway’s law before.
Coined by Melvin Conway, it goes like this:\n“organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.”\nNow, imagine the kind of systems that are produced by multiple teams where the communication is broken… These are mostly broken systems and failed projects. Nobody wants that!\nHow do you avoid broken communication? By actively promoting good communication! I don’t know what your role is in the project, but whatever it is you should be doing your bit. If you can, make sure that communication works well.\nGet a good chat program. You need a program where you can have team channels and global channels. A few spring to mind: Slack, Microsoft Teams, HipChat. I have seen projects transform in unbelievable ways by something as trivial as a good chat program being introduced.\nGet people to know each other. If you are lucky enough to be co-located with other teams, make sure that people know each other and are not afraid to talk. You can do that through presentations and activities in the office and you can do it outside of the office. Organizing cross-team lunches, after-work drinks and integration activities goes a long way. If your teams are not co-located – do what you can. Video conferencing can help.\nCelebrate each other’s successes. Amazing things happen when people share each other’s successes. You need a mindset where one team’s success is also another team’s success. You are in it together. Once people start seeing it this way, they will naturally start collaborating more.\nThere are more ways to foster good communication and they probably require their own blog post!\nAutomation as a means of achieving independence Assuming that you have the team structures and communication solved – what more can be done to promote independence?
I have noticed that one thing that helps a team become independent is automation.\nAutomation of testing, automation of deployment, automation of admin processes. The more things you can automate, the less you need to rely on people. Often overworked people who are not part of your team. Whenever your team relies on someone from outside of the team to do something – see if this can be automated to remove the dependency. Chances are that all parties involved will be happy.\nThe importance of a long-term vision The last thing to mention is that there should be a long-term vision/goal available to the team. It is great when the team is truly independent, but the team should never lose sight of what the company is trying to achieve. Every team should be aware of why what they are delivering is important and what the future looks like. This will not only motivate the team, but also prevent them from making mistakes by taking decisions that may jeopardize the future goals of the project.\nSummary Building teams is a large topic. There are whole books being written on the subject. I hope this article gives you an idea about the importance of having great teams and gives you some ideas where to look for improvements. Working as part of a high-performing team is among the most rewarding experiences I have had as a software developer. I hope you will have that experience too. If you are in a position to build such teams – make it happen. Great software is delivered by great teams.\n","permalink":"https://e4developer.com/posts/building-services-requires-building-teams/","summary":"\u003cp\u003eWhen talking about microservices (or agile development), the idea of self-managed, independent teams often comes up. What is not emphasized enough is how vital this idea is to the successful adoption of such a highly decoupled architecture.
In this article, I will share my experiences of working with independent teams and being part of them.\u003c/p\u003e\n\u003cp\u003eMost successful projects that I was either part of, or had the pleasure of working with, had one thing in common. They were delivered by one or more high-performing, self-managed, independent teams. You may say: \u003cem\u003eevery sizable project nowadays is delivered by a team!\u003c/em\u003e I disagree: most medium and large projects are delivered by groups of people, but in many cases, I would not call them \u003cem\u003eteams\u003c/em\u003e. In my opinion, it takes more than grouping a few people together and slapping a name on them to make a team…\u003c/p\u003e","title":"Building services requires building teams"},{"content":"On the 15th February 2018 I had the pleasure of speaking at The JVM Roundabout Meetup in London.\nMy topic was “Practical Choreography with Spring Cloud”. The idea behind the talk was to give practical advice on introducing choreography into a microservices architecture. The emphasis is on practical. Choreography can be intimidating, especially for those not familiar with the pattern. Since I am a big advocate of Spring Cloud and I think it is a great framework for newcomers and experts alike, I demonstrate how it can be used to make that adoption easier.\nIf you are interested in the topic, or if you would like to introduce choreography into your architecture, watch my talk, which is available on YouTube:\nI have also made the slides used for this talk available.
You can download the slides here: Practical-Choreography-with-Spring-Cloud-JVM-Roundabout.pptx\n","permalink":"https://e4developer.com/posts/practical-choreography-with-spring-cloud-presentation/","summary":"\u003cp\u003eOn the 15th February 2018 I had the pleasure of speaking at \u003ca href=\"https://www.meetup.com/JVM-Roundabout/\"\u003eThe JVM Roundabout\u003c/a\u003e Meetup in London.\u003c/p\u003e\n\u003cp\u003eMy topic was \u003cem\u003e“Practical Choreography with Spring Cloud”.\u003c/em\u003e The idea behind the talk was to give practical advice on introducing choreography into a microservices architecture. The emphasis is on practical. Choreography can be intimidating, especially for those not familiar with the pattern. Since I am a big advocate of Spring Cloud and I think it is a great framework for newcomers and experts alike, I demonstrate how it can be used to make that adoption easier.\u003c/p\u003e","title":"Practical Choreography with Spring Cloud - Presentation"},{"content":"In this article, I will show you how you can get started with Spring Cloud Data Flow. Spring Cloud Data Flow is an amazing platform for building data integration and processing pipelines. It has a very user-friendly graphical dashboard where you can define your streams, making your work with data an absolute pleasure.\nThe goal of this article is to have you learn to build some simple data pipelines by the time you are finished reading. Before we get started there are a few system requirements:\nYou should have JDK 8 installed, as at the time of writing Spring Cloud Data Flow is somewhat tricky to get to work with JDK 9 (missing JAXB libraries) You should have Docker installed. If you are not sure why this is useful, I have written an article explaining Docker’s use as a development tool. If you still don’t want to install Docker, you need to be able to get MySQL, Redis, and RabbitMQ accessible from your machine. You should have Apache Maven installed on your machine.
The official installation guide should be easy enough to follow. Assuming that you have the tools required, we can get started!\nGetting Spring Cloud Data Flow Server up and running As I have mentioned, in order to get the platform running, you need some middleware. The first is RabbitMQ; you could use Kafka for your stream communication, but for the simplicity of this tutorial we are going to go with RabbitMQ:\ndocker run --name dataflow-rabbit -p 15672:15672 -p 5672:5672 -d rabbitmq:3-management\nRunning this command will start a RabbitMQ Docker container on your machine exposed on the default ports. You will also get a management console that will let you check on the status of your broker.\nIn order to get analytics from Spring Cloud Data Flow, you will need Redis as well. This is not 100% required, but since it is not much hassle – let’s get it started. If you are running Data Flow in a production deployment you will definitely want it:\ndocker run --name dataflow-redis -p 6379:6379 -d redis\nThe last pre-requisite is a MySQL instance. If you do not have it, you will end up with an in-memory H2 database powering Data Flow. The problem with that is that you will lose all your data on a server restart. This could actually be desirable for testing, but incredibly frustrating if you invest some time in configuring your Streams only to lose them on a server restart. While creating the container, we will set a custom password and create a database for Data Flow:\ndocker run --name dataflow-mysql -e MYSQL_ROOT_PASSWORD=dataflow -e MYSQL_DATABASE=scdf -p 3306:3306 -d mysql:5.7\nWith these three Docker containers up and running you are ready to download the Data Flow server and get it started. You can download it from here: https://repo.spring.io/libs-release/org/springframework/cloud/spring-cloud-dataflow-server-local/1.3.0.RELEASE/spring-cloud-dataflow-server-local-1.3.0.RELEASE.jar.
This is the latest version at the time of writing; the official project website could have a more up-to-date link, but a newer version is not guaranteed to work exactly the same.\nWe have downloaded the local version of the server. That means that the different applications composing our Streams will be deployed as local Java processes. There are Cloud Foundry and Kubernetes versions of the server available if you want something more production-ready.\nTime to start the server. We will pass the MySQL and RabbitMQ parameters in this starting command. The Redis default properties are good enough:\njava -jar spring-cloud-dataflow-server-local-1.3.0.RELEASE.jar --spring.datasource.url=jdbc:mysql://localhost:3306/scdf --spring.datasource.username=root --spring.datasource.password=dataflow --spring.datasource.driver-class-name=org.mariadb.jdbc.Driver --spring.rabbitmq.host=127.0.0.1 --spring.rabbitmq.port=5672 --spring.rabbitmq.username=guest --spring.rabbitmq.password=guest\nSpring Cloud Data Flow – First Look Hopefully, your server started without a problem and you are seeing something like this in your console:\nIf you want, you can look into your MySQL instance where you should see a bunch of tables created:\nTime to see Spring Cloud Data Flow itself! Go to http://localhost:9393/dashboard to see the dashboard:\nIt looks a bit empty! This is because we did not load any starter apps. Spring Cloud Stream App Starters is a project that provides a multitude of ready-to-go starter apps for building Streams. You can read from FTP, HTTP, JDBC, Twitter and more, process the data, and save it to a multitude of destinations. There are three main concepts to which each Application can belong:\nSource – These are the available sources of data. You start your streaming pipelines from them. Processor – These take data and send them further in the processing pipeline. They sit in the middle. Sink – They are the endpoints for the streams. This is where the data ends up.
These are being constantly added, and you can see the up-to-date list on the official project site. Currently we have:\nSources: file, ftp, gemfire, gemfire-cq, http, jdbc, jms, load-generator, loggregator, mail, mongodb, mqtt, rabbit, s3, sftp, syslog, tcp, tcp-client, time, trigger, triggertask, twitterstream\nProcessors: aggregator, bridge, filter, groovy-filter, groovy-transform, header-enricher, httpclient, pmml, python-http, python-jython, scriptable-transform, splitter, tasklaunchrequest-transform, tcp-client, tensorflow, transform, twitter-sentiment\nSinks: aggregate-counter, cassandra, counter, field-value-counter, file, ftp, gemfire, gpfdist, hdfs, hdfs-dataset, jdbc, log, mongodb, mqtt, pgcopy, rabbit, redis-pubsub, router, s3, sftp, task-launcher-cloudfoundry, task-launcher-local, task-launcher-yarn, tcp, throughput, websocket\nThis is an impressive list! So how do we get them into the Spring Cloud Data Flow server? It could not be easier! First, we are going to use the RabbitMQ + Maven flavour of the starters, as this is how we set up the server. From the project website, the URL for the stable release is http://bit.ly/Celsius-SR1-stream-applications-rabbit-maven. We can supply this to the Data Flow server. First, click:\nAnd then populate the URI and click the Import button:\nIf all went well, then you should see multiple starter apps available.\nBuilding our first Data Flow Stream We are now ready to build our first Data Flow Stream. To do this we will head to the Streams tab on the Dashboard and click the Create Stream button:\nHere, we will create a stream that reads from an HTTP endpoint, upper-cases the content and saves it all to a file in c:/dataflow-output (if you are on Windows; otherwise you can choose a different directory). The aim of this exercise is to show you how Source, Processor and Sink connect together and how seamless it all is!
Let’s drag and drop the following into the workspace:\nSource – HTTP Processor – transform Sink – file You should see the following:\nAs you can see, there are red exclamation marks displayed. That means that the Stream is not healthy. You should click on the tiny squares in the graphical representation to connect the apps, or alternatively, in the text field, you can specify how the Stream should be composed:\nhttp | transform | file\nWith that, we just need to configure our stream accordingly. This can be done either by clicking on the cog-wheel icon that appears when an app is selected in the graphical representation:\nOr by using the text field. One thing that you get from the graphical interface is quite a nice way of inputting the properties. For example, to configure the HTTP source we can simply set the port like this:\nLet’s set the remaining properties via the text field. The final Stream description should look like this:\nhttp --port=7171 | transform --expression=payload.toUpperCase() | file --directory=c:/dataflow-output\nGreat! We have our first stream. Now let’s click the Create Stream button visible just above the input text field and set the stream name to Upper-Case-Stream:\nI have ticked the Deploy stream(s) box to have the stream automatically deployed. The stream should be deployed shortly:\nTrying out the Stream It would not be much fun to just create the stream and not try it! To do that, you can get Postman running and send a few requests to the HTTP endpoint:\nYou will quickly see that there are relevant queues and exchanges created in the connected RabbitMQ instance:\nAnd finally, looking into the file and directory where we wanted to save the results of our Stream:\nCongratulations! You have made it through the creation of your first Spring Cloud Data Flow Stream!\nWhere from here? I hope that reading this introduction got you excited about using Spring Cloud Data Flow - I certainly enjoyed writing about it!
You should be aware that there is also the Spring Cloud Data Flow Shell available if you need to work with the platform in a shell-only environment (or if you prefer to!).\nThere is much more to Spring Cloud Data Flow. You can create your own Sources, Processors, and Sinks. You can create Tasks (run-on-demand processes) rather than Streams. There are complicated processing workflows that you can design. All of this is there to be discovered and used - hopefully with the knowledge from this article you are ready to start exploring on your own.\n","permalink":"https://e4developer.com/posts/getting-started-with-spring-cloud-data-flow/","summary":"\u003cp\u003eIn this article, I will show you how you can get started with Spring Cloud Data Flow. Spring Cloud Data Flow is an amazing platform for building data integration and processing pipelines. It has a very user-friendly graphical dashboard where you can define your streams, making your work with data an absolute pleasure.\u003c/p\u003e\n\u003cp\u003eThe goal of this article is to have you learn to build some simple data pipelines by the time you are finished reading. Before we get started there are a few \u003cstrong\u003esystem requirements\u003c/strong\u003e:\u003c/p\u003e","title":"Getting Started with Spring Cloud Data Flow"},{"content":"HATEOAS – Hypermedia as the Engine of Application State, a name long enough to intimidate and confuse. Behind this complicated name we have a rather simple and elegant idea. In this blog post, I explain what HATEOAS is and how it can be practically used to build more stable systems.\nHATEOAS is a way of designing a REST API. More precisely, it is a specific constraint of a REST architecture. It can be summed up with:\nAn API that guides the client through its usage.\nRephrasing it for clarity with some extra details:\nAn API that describes in its responses how it can be used, by providing URLs to other allowed actions.\nLet’s look at an example.
Assume that you have a REST service that provides descriptions of different products; think of some e-commerce website. If you get a JSON response with a product from that website, and it was using HATEOAS, it could look something like this:\n{ \u0026#34;productId\u0026#34;: \u0026#34;123\u0026#34;, \u0026#34;productName\u0026#34;: \u0026#34;Super Microwave\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;The best microwave in the world. Fact.\u0026#34;, \u0026#34;links\u0026#34;: [{ \u0026#34;rel\u0026#34;: \u0026#34;self\u0026#34;, \u0026#34;href\u0026#34;: \u0026#34;http://localhost:8080/super-shop/api/products/123\u0026#34; }, { \u0026#34;rel\u0026#34;: \u0026#34;details\u0026#34;, \u0026#34;href\u0026#34;: \u0026#34;http://localhost:8080/super-shop/api/products/123/details\u0026#34; }, { \u0026#34;rel\u0026#34;: \u0026#34;addToCart\u0026#34;, \u0026#34;href\u0026#34;: \u0026#34;http://localhost:8080/super-shop/api/addToCart/123\u0026#34; }] } Here, you can see a few links added. In this implementation (inspired by Spring HATEOAS) you have two fields:\nrel – stands for ‘relationship’ and explains how the link relates to the object that you requested: self means this is the link to the object itself, details points to more detailed information, and addToCart is the way of adding this product to a shopping cart. href – a complete URL that shows how the action can be performed\nThis should be clear now, but there is a bit more to HATEOAS. As the name means Hypermedia as the Engine of Application State, there should be some relation between that response and the application state. Imagine now that there was a product that was not available for purchase.
The hypothetical response could look like:\n{ \u0026#34;productId\u0026#34;: \u0026#34;345\u0026#34;, \u0026#34;productName\u0026#34;: \u0026#34;The Philosophers Stone\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;Transforms anything into gold\u0026#34;, \u0026#34;links\u0026#34;: [{ \u0026#34;rel\u0026#34;: \u0026#34;self\u0026#34;, \u0026#34;href\u0026#34;: \u0026#34;http://localhost:8080/super-shop/api/products/345\u0026#34; }, { \u0026#34;rel\u0026#34;: \u0026#34;details\u0026#34;, \u0026#34;href\u0026#34;: \u0026#34;http://localhost:8080/super-shop/api/products/345/details\u0026#34; }] } Great stuff - The Philosophers Stone! But there is no addToCart link; maybe the item sold out, or it is not for sale - either way, this action is not available. The client of the REST service discovers that fact by not having the link available to carry out the action. The application discloses its state via Hypermedia… Hypermedia becomes the engine that drives the Application State… Hypermedia as the Engine of Application State – HATEOAS.\nWith this simple explanation, let’s consider some implications of using HATEOAS and look at some misconceptions:\nHATEOAS reduces the configuration needed One thing that is great about HATEOAS is that it reduces the need for configuring URL endpoints. All these URLs telling you how to look up product details? How to add a product to the shopping cart? You don’t need them hardcoded or in some configuration files. They are supplied by the application. If you really want to have something in your config files, you could place the rel relationships there. In any application of reasonable complexity, you are likely to have dozens if not hundreds of different REST API calls. This makes that benefit a very real one.\nHATEOAS can drastically reduce the amount of brittle configuration.\nHATEOAS promotes loose coupling It can be a considerable challenge to build an application based on REST services in such a way that changing the API is easy.
HATEOAS does not make it trivial (as you still can’t easily remove or change rel's) but it does improve on the status quo. You can change your URLs and their structure with relative ease - that is something.\nHATEOAS is not magic - you still need to know the API to code the interactions When reading about HATEOAS you will see the discoverable API aspect highlighted quite a lot. You should really understand it at a conceptual level. The API is discoverable, but this is more about the lack of hardcoded URLs than about services somehow figuring out which rel's they should interact with. The one aspect of this discoverability that is worth highlighting is that it makes it easier for the developers programming the interactions to understand the API. This sort of living documentation adds a lot of value.\nIs there library support for HATEOAS? There are a few libraries that help with HATEOAS in the Java space. The popular ones include:\nSpring HATEOAS – the standard way of doing HATEOAS with Spring, still awaiting version 1.0 at the time of writing. Since I am a Spring aficionado, this is what I would recommend. Jersey – the reference implementation of JAX-RS, which provides HATEOAS support. VRaptor – if you enjoy CDI beans, this MVC framework provides HATEOAS support, explored by ZeroTurnaround in their blog post from 2014. Should I use HATEOAS for my next project? This is really for you to decide. As you can see, it is a nice idea that improves important parts of the system. Loose Coupling and the Open/Closed principle are core system qualities worth the effort. On the other hand, it does add overhead, and if you are building a rather trivial system that is unlikely to change - the complications may outweigh the benefits.
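To make the earlier JSON examples concrete, here is a plain-Java sketch of the core mechanic (hypothetical classes, not the Spring HATEOAS API): links are built conditionally, so the response only advertises the actions the current application state allows.

```java
import java.util.ArrayList;
import java.util.List;

// Toy HATEOAS link builder: a client discovers 'addToCart'
// only when the product is actually purchasable.
public class HateoasSketch {
    record Link(String rel, String href) {}

    static List<Link> linksFor(String productId, boolean purchasable) {
        String base = "http://localhost:8080/super-shop/api";
        List<Link> links = new ArrayList<>();
        links.add(new Link("self", base + "/products/" + productId));
        links.add(new Link("details", base + "/products/" + productId + "/details"));
        if (purchasable) { // state decides which actions are exposed
            links.add(new Link("addToCart", base + "/addToCart/" + productId));
        }
        return links;
    }

    public static void main(String[] args) {
        System.out.println(linksFor("123", true).size());  // 3 links, addToCart included
        System.out.println(linksFor("345", false).size()); // 2 links, addToCart withheld
    }
}
```

The client never hardcodes the addToCart URL; it simply looks for the rel and, when it is absent, knows the action is unavailable.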
Hopefully, with a good understanding of the idea, you can make the best decision for your project.\n","permalink":"https://e4developer.com/posts/hateoas-simple-explanation/","summary":"\u003cp\u003eHATEOAS – Hypermedia as the Engine of Application State, a name long enough to intimidate and confuse. Behind this complicated name we have a rather simple and elegant idea. In this blog post, I explain what HATEOAS is and how it can be practically used to build more stable systems.\u003c/p\u003e\n\u003cp\u003eHATEOAS is a way of designing a REST API. More precisely it is a specific constraint of a REST architecture. It can be summed up with:\u003c/p\u003e","title":"HATEOAS - a simple explanation"},{"content":"GRASP stands for General responsibility assignment software patterns. You might have heard of it before, or you might not. Either way, you might not have thought about how these principles can help when assigning responsibilities in a microservices architecture. Craig Larman in his book Applying UML and Patterns said that the “desert island skill”, the most important skill to have in Object Oriented Analysis/Design, would be: “to skillfully assign responsibilities to software objects”. I think there is some truth to this when thinking about the most important microservices skill as well. Let’s look at GRASP through the prism of microservices.\nGRASP principles/patterns are here to help us build better Object Oriented systems. The main goals of Object Oriented systems are:\nModularity Reusability If we think about what we are usually trying to achieve with our microservices - these goals are very similar. We want a modular system with components that are loosely coupled and stand well on their own. These are well-established patterns and principles that come from 40+ years of industry experience of building Object Oriented systems that stood the test of time.
Given that, we should see if we can find some universal truths there that can help us deal with our design problems. I think I have made my case, so we can start looking at the 9 patterns and how they can help us reason about microservices:\nThe Nine GRASP principles/patterns: Information Expert Question: What is a general principle of object design and responsibility assignment?\nAnswer: Assign a responsibility to the information expert – the class that has the information necessary to fulfill the responsibility.\nApplication to Microservices: This is quite a key pattern for OOA/D and it translates really well to the world of microservices. When we want to assign some responsibility, we should look at which microservice already owns the necessary data. That will make everything easier. If we don’t need to worry about moving data around, then our job becomes easier. If it turns out that multiple services are always required to gather the data to do anything - maybe that is a sign that the microservices were not divided along logical lines. This is one of the first things you should think about when assigning responsibility to microservices.\nController Question: What first object beyond the UI layer receives and coordinates system operation?\nAnswer: Assign the responsibility to an object representing one of these choices:\nRepresents the overall system, a root object, a device that the software is running within, or a major subsystem (these are all variations of a façade controller) Represents a use case scenario within which the system operation occurs (a use-case or session controller) Application to Microservices: The façade controller is the relevant one here. This is UI specific, but I think very relevant to microservices. It is really about dealing with a system completely external to the microservices architecture that you have some control over. If you are dealing with a UI - yes, you should have a Controller that the rest of the system hides behind.
If there is some other arbitrary interface interacting with your system - you want to hide your system behind a Controller as well. It is all about hiding the system details and exposing a stable interface. This is why we have Ingress in Kubernetes and Zuul in Spring Cloud (in the Netflix part).\nLow Coupling (evaluative) Question: How to reduce the impact of change?\nAnswer: Assign responsibilities so that (unnecessary) coupling remains low. Use this principle to evaluate alternatives.\nApplication to Microservices: This is one of the main goals and hallmarks of good software/system design. Low coupling should drive most of our design decisions. Because if it does not, if you do not care for it, there is a large risk that you will end up with a distributed monolith. A system that pretends to be microservices, but has none of the benefits of a microservices system and all the difficulties and challenges. Low coupling can be viewed as one of the key measures of a successful microservices implementation.\nHigh Cohesion (evaluative) Question: How to keep objects focused, understandable, manageable, and as a side-effect, support Low Coupling?\nAnswer: Assign responsibilities so that cohesion remains high. Use this to evaluate alternatives.\nApplication to Microservices: This is another key characteristic of a successful microservices architecture. I have read multiple blog posts criticizing microservices and choreography in general, stating that there is just too much communication overhead! If you have so much overhead that your system starts to become unusable - your cohesion may be too low. It often seems great to make these services really micro, but if we split them along the wrong lines, if we split something that is meant to be together, we end up with extremely chatty services. If you have a couple of services that can’t seem to get anything done on their own - you may consider making them into a single service.
I have written about this problem in my common technical debt in microservices blog post.\nPolymorphism Question: Who is responsible when behaviour varies by type?\nAnswer: When related alternatives or behaviours vary by type (class), assign responsibilities for the behaviour - using polymorphic operations - to the types for which the behaviour varies.\nApplication to Microservices: At first this does not seem to relate to microservices in any obvious way. But think about what Polymorphism is - providing a single interface to entities of different types. How do you achieve polymorphism in your architecture? You want to get different behavior depending on the type of object that you are dealing with. You want to make a call, but you want it to be handled differently depending on what that call is, without actually knowing how it resolves, without knowing which service picks it up. I think this is an excellent illustration of the value that messaging/reactive services can bring. You publish the message and, depending on the type of that message, different microservices may pick it up, filter it and do what is necessary, without the caller knowing.\nCreator Question: Who creates? (use of Factory does not exclude creator)\nAnswer: Assign class B the responsibility to create an instance of class A if one of these is true:\nB contains A B aggregates A B has the initializing data for A B records A B closely uses A Application to Microservices: Here the connection is again not as clear as in some of the other patterns. This pattern talks specifically about creating instances of a class. How does that translate? In OOA/D, classes usually correspond closely to objects in the domain model. In microservices, if you follow Domain Driven Design (or often just common sense) you may also see this one-to-one relationship between domain objects and data stored in a specific microservice.
The microservice that owns that data becomes the “golden source” of this information. You can think of it as the Creator of this object representation, when it instantiates it either as a JSON object, a message, or in some other way. The mapping does not hold as neatly here, but the spirit of the pattern still makes sense. The service responsible for instantiating the data, the service that owns the data, should be the one that fulfills some of the criteria mentioned, with the criterion “B records A” being self-fulfilling in this case.\nPure Fabrication Question: Who is responsible when you are desperate, and do not want to violate high cohesion and low coupling?\nAnswer: Assign a highly cohesive set of responsibilities to an artefact or convenience “behaviour” class that does not represent a problem domain concept - something made up, in order to support high cohesion, low coupling, and reuse.\nApplication to Microservices: In similar fashion to OOA/D, this is the voice of pragmatism. If for some reason you need to create a service that does not really correspond closely with your domain, you have permission to do just that. Of course, you have that permission only if it helps you achieve high cohesion, low coupling, and reuse - all noble goals. Plenty of supporting services, such as message brokers, cache providers, and request routers, do not correspond to the domain model, but help the system achieve these good characteristics. This is the permission to use them.\nIndirection Question: How to assign responsibilities to avoid direct coupling?\nAnswer: Assign responsibility to an intermediate object to mediate between other components or services, so that they are not directly coupled.\nApplication to Microservices: This principle relates to microservices very directly. It can be translated as: if your services are coupled, use another component in between to decouple them. This sounds like a job for a message queue!
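A tiny in-memory sketch of this indirection (a hypothetical plain-Java "broker" standing in for RabbitMQ or similar): the publisher knows only the channel name, never the subscribers, so services can be added or removed without the publisher changing.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Minimal publish/subscribe broker: the intermediate object that
// mediates between services so they are not directly coupled.
public class BrokerSketch {
    private final Map<String, List<Consumer<String>>> channels = new HashMap<>();

    void subscribe(String channel, Consumer<String> handler) {
        channels.computeIfAbsent(channel, c -> new ArrayList<>()).add(handler);
    }

    void publish(String channel, String message) {
        channels.getOrDefault(channel, List.of()).forEach(h -> h.accept(message));
    }

    public static void main(String[] args) {
        BrokerSketch broker = new BrokerSketch();
        List<String> received = new ArrayList<>();
        // Two independent "services" listen on the same channel.
        broker.subscribe("orders", m -> received.add("billing:" + m));
        broker.subscribe("orders", m -> received.add("shipping:" + m));
        broker.publish("orders", "order-42"); // publisher knows only the channel
        System.out.println(received); // [billing:order-42, shipping:order-42]
    }
}
```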
Of course, you could argue that an API gateway fulfills a similar goal. With a message broker the coupling is even looser, as the only things the services share are the location of the broker and the expected message format/channel. Adding and removing services from the interaction can be pretty seamless. It is worth thinking about this next time we integrate our services.\nProtected Variations Question: How to assign responsibilities to objects, subsystems, and systems so that the variations or instability in these elements do not have an undesirable impact on other elements?\nAnswer: Identify points of predicted variation or instability; assign responsibilities to create a stable “interface” around them.\nApplication to Microservices: This is absolutely key to any software design effort. Protecting against variation is what we are trying to achieve when designing APIs, when thinking about our domain model, when writing the services themselves. This is a key principle for OOA/D, microservices and good software engineering in general. So, how does it specifically relate to microservices?\nStable APIs that do not need to change Using message queues as a preferred mode of communication - it is easy to add other services Data model that is open for extension, but closed for change Core services having limited responsibilities so they do not grow beyond their logical scope You could say that the whole microservices pattern comes from the idea of Protected Variations. You can actually finish writing a microservice and have it be stable despite everything else changing in the system. This is not quite the case with a monolith, as it is the system - it always changes.
What did surprise me, however, was reflecting on these principles and realizing how strongly they seem to favor choreography as a preferred mode of integration. Indirection, Polymorphism, Protected Variations - the ideas expressed by these patterns/principles are very much aligned with the goals of a choreographed microservices architecture.\n","permalink":"https://e4developer.com/posts/application-of-grasp-to-microservices/","summary":"\u003cp\u003eGRASP stands for General responsibility assignment software patterns. You might have heard of it before, or you might not. Either way, you might not have thought about how these principles can potentially help when deciding your responsibilities assignments in a microservices architecture. Craig Larman in his book \u003ca href=\"http://amzn.to/2EykJaH\"\u003eApplying UML and Patterns\u003c/a\u003e said that the \u003cem\u003e“desert island skill”\u003c/em\u003e, the most important skill to have in Object Oriented Analysis/Design would be: “to skillfully assign responsibilities to software objects”. I think there is some truth to this when thinking about the most important microservices skill as well. Let’s look at GRASP through the prism of microservices.\u003c/p\u003e","title":"Application of GRASP to Microservices"},{"content":"Working for a consultancy, I have the opportunity to talk to a large number of software developers across multiple different projects. That gives me a chance to see what works and what common problems different organizations are facing. In this article I will show the most common mistakes in microservices that cause technical debt, and suggest what you could be doing instead.\nMicroservices development can be difficult. Given how new the pattern is, it is no surprise that mistakes are made. The great thing about mistakes is that they are also learning opportunities.
It is great when you can learn from your own mistakes, but it is even better when you can learn from the mistakes of others!\nBefore going into specific examples, there are two important things to point out about technical debt:\nSometimes some technical debt is desirable. If we are building a proof of concept, we may not want to go with production-grade infrastructure that we end up deleting. When we are racing to finish some feature that will give our product a killer market advantage - another case where technical debt can be accepted. High-performing teams can use technical debt to their advantage. It is important to realize when you are taking on technical debt. The problem is when it is done blindly and the team only realizes when it is too late. This, the unexpected technical debt, is usually the worst and the most dangerous kind. Either kind (the planned and the unplanned) should be recorded by the team so that if there is time for some tech-debt work - it can be quickly identified and worked on.\nTechnical debt is not just like financial debt - there is more risk involved. When dealing with financial debt, there are predictable payments involved. Most stock-market-listed companies want to take on some debt so that they can grow faster - this is called leveraging the business. When we deal with technical debt, the payments, so to speak, can vary dramatically. Something may require a few minutes to fix, but potentially make the system fail in production. That risk factor being decoupled from the size of the debt makes it quite different from financial debt. When looking at technical debt, and especially when discussing it with your more business-minded colleagues - make sure to highlight and focus on the risks!
Work required is important, but risks are the key to prioritization.\nWith that out of the way, what are the most common causes of technical debt that appear in microservices?\nLeading causes of technical debt in microservices Microservices Configuration – Done Badly This seems to be by far the most common problem! Many enterprises are not really used to the modern way of dealing with configuration files.\nProblem: A common way to get configuration management wrong is to put specific configuration files for each environment into every microservice. As the number of microservices and environments grows, this starts to grow in the order of O(N^2). Before you know it, you need to update 30 properties across 10 microservices (and Git repos), changing 3 configuration files in each. This is not sustainable; it wastes hours and causes frustrating bugs.\nSolution: First, you don’t need environment-specific configuration files. This can be handled by configuration servers (for example Spring Cloud Config) and by using environment variables. You could make use of Kubernetes Secrets, environment variables - anything, really, that stops you from having to manually include service endpoints, database configurations and other similar things in every microservice for each environment. Service discovery tools such as Eureka or Consul can help to make it even better.\nExistence of a God Library Developers like libraries. Why wouldn’t we? I don’t know many people who enjoy re-typing or copy-pasting code all over the place. We should put shared code into libraries. The plural form here is the clue…\n**Problem:** A library gets created - something named along the lines of microservices-utilities or project-standard-library - that before you know it includes all the potentially shared code in the project. Security, dealing with message queues, some ad hoc business domain logic and more ends up bundled in one library.
Now all your microservices depend on a huge number of often unnecessary things and may be impacted in the most unusual ways. Sometimes upgrades to library versions will affect services that should not be affected.\n**Solution:** It is simple to deal with this problem if you can see it early enough. Don’t create the God Library! Instead, use multiple different libraries that have well-defined boundaries. This is a common technique in software development and it can be compared to Interface Segregation from the SOLID principles. Despite targeting Object Oriented development, the SOLID principles translate quite well to the world of microservices.\nPoorly implemented security Securing microservices is more difficult than securing a monolithic application. That does not mean that it is less important!\n**Problem:** Securing microservices can be inherently difficult. There are more services that manipulate data and they can be called in a number of ways. With more vectors of attack and more work to do to implement security, we naturally see less of it… This is one of those technical debts that I discussed earlier - the very risky kind. Some may say technical debt, others may say non-functional requirement. The bottom line is - the risk is huge and so are the vulnerabilities that we often see.\n**Solution:** Take security seriously. Think of it from the start. Do not try to build your own security. If you are dealing with a Spring Cloud application, get familiar with Spring Cloud Security, and no matter what you are dealing with, get familiar with OAuth2 and the Open Web Application Security Project (OWASP).\nHighly coupled services We often see cases where two services, despite being separate, seem to be completely dependent on each other.\n**Problem:** When two microservices closely depend on each other, the benefits of loosely coupled services are lost.
This often makes the code difficult to write and follow (as the business logic spans the two services) as well as compromising performance. The problem often gets worse with time rather than better.\n**Solution:** If two microservices closely depend on each other, it could be a sign that they should really be a single service. Sometimes the decision about what to split into multiple services and what to keep together is made a bit arbitrarily. If the decision does not make sense from the domain perspective, it will be nearly impossible to keep these services loosely coupled. Microservices should be as small as is practical - no smaller just for the sake of their size!\nDeployment is separated from the developers When talking about technical debt and its causes it is useful to look beyond the process of writing code. How the team interacts with the rest of the organisation is also very important. It can make work great or cause countless issues.\n**Problem:** Some companies that have not fully adopted the devops mindset do whatever they can to separate the developers from the deployment and operational aspects of the solution. This often results in misunderstandings, mistakes in configuration, and both groups of people spending a lot of time on what the other group would solve quickly.\n**Solution:** Do everything you can to get operations and development as close as possible. Sitting together (co-location, in business speak), having good communication tools (I recommend Slack) and getting people to feel that they are part of the same team can all help. There is a great book written on the topic, the DevOps Handbook, which I really recommend if you want to know more.\nPoorly implemented APIs for Orchestration Orchestration is the most popular method of integrating microservices. There is nothing wrong with that - making REST calls using JSON seems easy and natural.
The technical part of it is usually not the problem - the APIs that are being built are.\n**Problem:** Microservices APIs are often created in a chaotic fashion. When implementing a new feature, a developer takes the required services and adds APIs as she sees fit. This can work, but without any standards agreed upfront or a general API design, it often leads to a fragmented API that constantly changes.\n**Solution:** Having a stable, well-crafted API makes working with microservices easier. Often the only thing necessary is agreeing on some rules about what sort of things should be returned and how methods should be named. It is tricky to recommend a one-size-fits-all approach here. Your team needs to decide what works best for your project and start following the agreed rules. One thing I found quite useful is to share Postman configurations that make it easy to discover and try APIs.\nAvoiding Choreography at all cost As mentioned above, orchestration is the most popular method of integrating microservices. Orchestration is great, but it is not always the best approach…\n**Problem:** Some design problems are solved much more easily with messaging than with direct calls. If you have long-running business transactions (spanning minutes, hours, or days), managing them with databases and REST calls can be much more cumbersome than doing the same with message queues.\n**Solution:** Introduce a messaging solution to your microservices architecture. It should be a tool that your team has at their disposal. Not sure where to start? Have a look at RabbitMQ. I also have a choreography category on this blog that may interest you.\nWhat can we do to get better? These are some of the most common causes of technical debt in microservices that I have encountered. They are not all of the problems, but if you manage to eliminate all of them, you are definitely above average with your microservices-oriented system. This is a constantly evolving space. The worst thing you could do is to stop learning.
If your team chose the path of microservices, then you are on the path of learning. Read articles, experiment and enjoy the journey!\n","permalink":"https://e4developer.com/posts/common-technical-debt-in-microservices/","summary":"\u003cp\u003eWorking for a consultancy I have the opportunity to talk to a large number of software developers across multiple different projects. That gives me an opportunity to see what works and what common problems different organizations are facing. In this article I will show the most common mistakes in microservices that cause technical debt, and suggest what you could be doing instead.\u003c/p\u003e\n\u003cp\u003eMicroservices development can be difficult. Given how new the pattern is, it is no surprise that mistakes are made. The great thing about mistakes is that they are also learning opportunities. It is great when you can learn from your mistakes, but it is even better when you can learn from the mistakes of others!\u003c/p\u003e","title":"Common Technical Debt in Microservices"},{"content":"One of the challenges in building distributed systems is having good visibility into what is happening inside them. This challenge is only magnified when dealing with choreography- microservices, loosely coupled, communicating via messaging. In this article you will see how Sleuth and Zipkin help to solve that problem.\nOne of the most important requirements for production-ready microservices is being able to correlate logs. What does that mean? Having some sort of id that will link logs from different services together. Of course you don’t want to link everything- you want to focus on a single request/process that is happening in the system. This was often done with MDC (Mapped Diagnostic Context) in slf4j. There is nothing wrong with using these technologies directly, but here I want to show you something better…\nMeet Spring Cloud Sleuth Spring Cloud Sleuth is a project designed to make tracing requests in microservices easy. 
It succeeds spectacularly at that goal. If you are using Spring Boot (and you should!), enabling Sleuth only requires adding a single dependency:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.cloud\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-cloud-starter-sleuth\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; After adding this dependency, requests to your microservices will be traced. To see more of that tracing, you need to add the following to your application config:\nlogging.level.org.springframework.cloud.sleuth=DEBUG\nAfter enabling that, you should start seeing some new, interesting logs in your services:\n2018-02-08 22:30:16.431 DEBUG [food-order-publisher,,,] 12572 --- [nio-8080-exec-7] o.s.c.sleuth.instrument.web.TraceFilter : Received a request to uri [/order] that should not be sampled [false] 2018-02-08 22:30:16.454 DEBUG [food-order-publisher,888114b702f9c3aa,888114b702f9c3aa,true] 12572 --- [nio-8080-exec-7] o.s.c.sleuth.instrument.web.TraceFilter : No parent span present - creating a new span 2018-02-08 22:30:16.456 DEBUG [food-order-publisher,888114b702f9c3aa,888114b702f9c3aa,true] 12572 --- [nio-8080-exec-7] o.s.c.s.i.web.TraceHandlerInterceptor : Handling span [Trace: 888114b702f9c3aa, Span: 888114b702f9c3aa, Parent: null, exportable:true] 2018-02-08 22:30:16.457 DEBUG [food-order-publisher,888114b702f9c3aa,888114b702f9c3aa,true] 12572 --- [nio-8080-exec-7] o.s.c.s.i.web.TraceHandlerInterceptor : Adding a method tag with value [orderFood] to a span [Trace: 888114b702f9c3aa, Span: 888114b702f9c3aa, Parent: null, exportable:true] I am using the https://github.com/bjedrzejewski/food-order-publisher project as an example here. If you are interested in how messaging with Spring Cloud Stream works, check my earlier post about it. There is another blog post that explains error handling used in the code that we will use here. 
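As an aside, Sleuth also places these identifiers into slf4j’s MDC, so you can surface them in your own log pattern too. A sketch, assuming the default Sleuth 1.x MDC key names (X-B3-TraceId and X-B3-SpanId) - check your version before relying on them:

```xml
<!-- logback-spring.xml (fragment) - prints the Sleuth trace and span ids
     from the MDC next to every log line; the ':-' syntax supplies an
     empty default when no trace is active for the current thread. -->
<pattern>%d{HH:mm:ss.SSS} [%thread] [%X{X-B3-TraceId:-},%X{X-B3-SpanId:-}] %-5level %logger{36} - %msg%n</pattern>
```

This is handy if you ship logs somewhere that parses a custom pattern rather than the default Spring Boot console format.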
Now, assuming you have the basics, let’s look closer at the log that is being created:\n[food-order-publisher,888114b702f9c3aa,888114b702f9c3aa,true]\nWhat you are seeing there are the following parameters:\nappname – the name of the application that logged the span traceId – the id of the latency graph that contains the span spanId – the id of a specific operation exportable – whether the log should be exported to Zipkin or not (more about Zipkin later) Here, the traceId is the same as spanId, because this is the beginning of a trace:\n2018-02-08 22:30:16.454 DEBUG [food-order-publisher,888114b702f9c3aa,888114b702f9c3aa,true] 12572 --- [nio-8080-exec-7] o.s.c.sleuth.instrument.web.TraceFilter : No parent span present - creating a new span You can think of the traceId as a sort of parent of the spanId. I could not have described it better than the official documentation does with the following picture:\nAs you can see in the picture above, the traceId stays the same for the whole trace, while each spanId delimits a sort of black box, running from request to response.\nThe traceId and spanId propagate through REST calls automatically. You really don’t need to do anything special and you will see the same traceId across multiple Spring Boot servers- as long as they have Spring Cloud Sleuth of course. They are also automatically created for a number of different interactions, including interacting with data sources and messages.\nJust with that, you suddenly have the power to trace your logs expertly. If you add Logstash, Elasticsearch and Kibana- you can then easily filter by traceId and build up a holistic view of the system. It is incredible how much you get with Sleuth with so little effort. But wait, there is more…\nMeet Zipkin Zipkin is a project whose main use for us is to visualize the traces that you have collected with Sleuth. Zipkin Server used to be part of Spring Cloud (enabled by an annotation placed on a Spring Boot application), but it is currently a standalone project. 
Since I am quite a big fan of Docker, I recommend running the Zipkin server with the following command (provided you have Docker installed):\ndocker run -d -p 9411:9411 --name zipkin openzipkin/zipkin\nIt runs by default on port 9411, but this can be changed by passing different environment variables. If you are not keen on Docker, you can run Zipkin Server in multiple different ways as listed in the official Quickstart. After starting the Zipkin server and visiting port 9411 on your localhost you should see something like this:\nTo make use of this brand new Zipkin Server, we need to tell Spring Boot to actually use it. To do this, you can replace the Sleuth dependency with the Zipkin dependency (Zipkin includes Sleuth) by pasting the following into your pom file:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.cloud\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-cloud-starter-zipkin\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; You need to tell your services how to connect to the Zipkin Server and this is done with the fairly self-explanatory set of properties:\nspring.zipkin.service.name=food-order-consumer\nspring.zipkin.sender.type=web\nspring.zipkin.baseUrl=http://localhost:9411\nspring.sleuth.sampler.percentage=1.0\nHere the sender type is set to web as we want to report data to Zipkin via HTTP calls rather than a message queue (RabbitMQ for example is another option). sampler.percentage defines how many traces will be sent to Zipkin. The default is 0.1, which means 10%; here, for demo purposes, I decided on 1.0- 100%.\nExample output from a working Zipkin should look something like this:\nSleuth, Zipkin and Spring Cloud Stream working together – Example After discussing these technologies, I will show you how seamlessly they work together. 
For this demonstration I will use the code that I created for the previous two blog posts on Spring Cloud Stream (starting with Spring Cloud Stream and dead letter queue in Spring Cloud Stream). The finished code for the three projects used can be found on my GitHub in these three repositories:\nfood-order-publisher – dealing with publishing messages food-order-consumer – dealing with consuming messages food-order-dlq-processor – dealing with the dead letter queue – exceptions Step 1: Adding Sleuth and Zipkin to the Message Publisher This step is very simple. All that is needed is adding the properties and the relevant dependency as discussed earlier:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.cloud\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-cloud-starter-zipkin\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; spring.zipkin.service.name=food-order-publisher\nspring.zipkin.sender.type=web\nspring.zipkin.baseUrl=http://localhost:9411\nspring.sleuth.sampler.percentage=1.0\nWhen making a REST call to the service that publishes a message on the queue, I can now see everything being tracked by Zipkin:\nStep 2: Adding Sleuth and Zipkin to the Message Consumer This follows the same pattern: I need to add the same Maven dependency and set the properties correctly:\nspring.zipkin.service.name=food-order-consumer\nspring.zipkin.sender.type=web\nspring.zipkin.baseUrl=http://localhost:9411\nspring.sleuth.sampler.percentage=1.0\nWhen making a REST call to the publisher, I should now be able to see the interaction between the two services:\nStep 3: Adding Sleuth and Zipkin to the DLQ Handler This is where it gets a little difficult. My DLQ handler does not use Spring Cloud for handling the messages, but rather has its own RabbitMQ connection. 
In order to get that connected into the span, I have to add the Zipkin Maven dependency and the standard set of properties:\nspring.zipkin.service.name=food-order-dlq-processor\nspring.zipkin.sender.type=web\nspring.zipkin.baseUrl=http://localhost:9411\nspring.sleuth.sampler.percentage=1.0\nAnd I need to manually join the existing Span:\n@Autowired Tracer tracer; @RabbitListener(queues = DLQ) public void rePublish(Message failedMessage) { HeaderBasedMessagingExtractor headerBasedMessagingExtractor = new HeaderBasedMessagingExtractor(); MySpanTextMap entries = new MySpanTextMap(failedMessage.getMessageProperties().getHeaders()); Span span = headerBasedMessagingExtractor.joinTrace(entries); Span mySpan = tracer.createSpan(\u0026#34;:rePublish\u0026#34;, span); failedMessage = attemptToRepair(failedMessage); Integer retriesHeader = (Integer) failedMessage.getMessageProperties().getHeaders().get(X_RETRIES_HEADER); if (retriesHeader == null) { retriesHeader = Integer.valueOf(0); } if (retriesHeader \u0026lt; 3) { failedMessage.getMessageProperties().getHeaders().put(X_RETRIES_HEADER, retriesHeader + 1); this.rabbitTemplate.send(ORIGINAL_QUEUE, failedMessage); } else { System.out.println(\u0026#34;Writing to database: \u0026#34;+failedMessage.toString()); //we can write to a database or move to a parking lot queue } tracer.close(mySpan); } The whole file can be seen here. This is a temporary workaround. I wanted to demonstrate the ability to arbitrarily add to the Span, as this may not be the only occasion when you need to do this. 
The crucial parts here are:\n//Manually extracting the Span properties from the message and using //HeaderBasedMessagingExtractor from Spring to create the Span //(this could be done manually) HeaderBasedMessagingExtractor headerBasedMessagingExtractor = new HeaderBasedMessagingExtractor(); MySpanTextMap entries = new MySpanTextMap(failedMessage.getMessageProperties().getHeaders()); Span span = headerBasedMessagingExtractor.joinTrace(entries); //using the manually created Span to add it to the tracer Span mySpan = tracer.createSpan(\u0026#34;:rePublish\u0026#34;, span); //closing the Span tracer.close(mySpan); To do it in a cleaner fashion you should make use of Spring Aspect Oriented Programming (AOP) capabilities, but this is beyond the scope of this blog post. If you want to know the details, I recommend reading the Customisation chapter of the official documentation, which explains it in more detail. People involved in the Sleuth and Zipkin projects are actively working on adding new automated tracing to these projects. There is a good chance that by the time you read this, if you use the latest versions of the respective libraries, you won’t have to do it manually.\nLet’s make a few calls that will fail and make use of the DLQ handler. You will see how much easier the flow is to understand when you have a good visualization.\nYou can even get the details of the exceptions by clicking on the spans:\nAs you can see, this is a truly useful tool when investigating exceptions and understanding different flows in your choreography.\nSummary and what to do next I consider Sleuth an invaluable addition to any serious microservices system built around the Spring Cloud project. You don’t need to use Zipkin, but with the ease of integration I don’t see why you wouldn’t want to! Once you have your tracing figured out, it is very important to be able to easily search through your logs. 
To deal with this I recommend getting familiar with the ELK stack- Elasticsearch, Logstash and Kibana. Together with Sleuth and Zipkin they give you the ultimate insight into your logs and microservices communication!\n","permalink":"https://e4developer.com/posts/tracing-messages-in-choreography-with-sleuth-and-zipkin/","summary":"\u003cp\u003eOne of the challenges in building distributed systems is having good visibility into what is happening inside them. This challenge is only magnified when dealing with choreography- microservices, loosely coupled, communicating via messaging. In this article you will see how Sleuth and Zipkin help to solve that problem.\u003c/p\u003e\n\u003cp\u003eOne of the most important requirements for production-ready microservices is being able to correlate logs. What does that mean? Having some sort of id that will link logs from different services together. Of course you don’t want to link everything- you want to focus on a single request/process that is happening in the system. This was often done with MDC (Mapped Diagnostic Context) in slf4j. There is nothing wrong with using these technologies directly, but here I want to show you something better…\u003c/p\u003e","title":"Tracing messages in Choreography with Sleuth and Zipkin"},{"content":"When dealing with messaging in a distributed system, it is crucial to have a good method of handling bad messages. In complicated systems, bad messages and general failures when consuming messages are unavoidable. See how you can deal with this problem using Dead Letter Queues, RabbitMQ and Spring Cloud Stream.\nWhen dealing with messages in distributed systems it is important to know when things go wrong. When your services simply call one another, it is often quite trivial- if your call failed, you know that you have a problem! With messaging it is often not so clear- as a service, if you successfully published a message on a queue- your responsibility ends. 
Whose responsibility is it then to ensure that the message published was correct and, if not, that something will be done about it? Here, in the spirit of *“smart pipes”*, we assume that it is the broker’s responsibility to provide this service…\nIntroducing Dead Letter Queue A Dead Letter Queue is a queue dedicated to storing messages that went wrong. A list of what is meant by ‘went wrong’ is handily provided by Wikipedia:\nMessage that is sent to a queue that does not exist. Queue length limit exceeded. Message length limit exceeded. Message is rejected by another queue exchange. Message reaches a threshold read counter number, because it is not consumed. Sometimes this is called a “back out queue”. Here we are going to look at the last case. This roughly translates to: if a message fails to be consumed by a service a specified number of times, it should be moved to the Dead Letter Queue (often referred to as the DLQ). The good news is- with RabbitMQ and Spring Cloud Stream it is very easy.\nDead Letter Queue with RabbitMQ and Spring Cloud Stream I will assume here that you know the basics of Spring Cloud Stream and RabbitMQ. If you want to refresh your memory, you can check my earlier blog post on integrating RabbitMQ with Spring Cloud Stream. The basic idea here is that it is all very easy. Introducing a DLQ to your project continues that same trend. It took me some time to figure that out from the documentation, but all you really have to do is add spring.cloud.stream.rabbit.bindings.YOUR_CHANNEL_NAME.consumer.autoBindDlq=true to your consumer project properties. There is one small catch- if you already have an existing queue, you may need to delete it in order for your project to re-create the queue correctly. After adding this property, every time you have an exception in your consumer code (the message ends up not being acknowledged) the message will be put on the DLQ. 
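As a concrete sketch, the consumer properties could look like the fragment below. The channel name input is the default for the Sink interface, and the destination/group values match the example project (the resulting queue is foodOrders.foodOrdersIntakeGroup); exact property keys can vary between Spring Cloud Stream versions:

```properties
# Consume from the foodOrders destination as part of a named consumer group,
# and let the RabbitMQ binder create and bind the dead letter queue.
spring.cloud.stream.bindings.input.destination=foodOrders
spring.cloud.stream.bindings.input.group=foodOrdersIntakeGroup
spring.cloud.stream.rabbit.bindings.input.consumer.autoBindDlq=true
```

Note that a group is needed here: anonymous (group-less) subscriptions get auto-delete queues, which are not a good fit for dead-lettering.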
You can also trigger that scenario when using manual acknowledgements (which are a configuration option). Here, I am using the following code to simulate these exceptions:\npackage com.e4developer.foodorderconsumer; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; import org.springframework.cloud.stream.annotation.EnableBinding; import org.springframework.cloud.stream.annotation.StreamListener; import org.springframework.cloud.stream.messaging.Sink; @EnableBinding(Sink.class) @SpringBootApplication public class FoodOrderConsumerApplication { public static void main(String[] args) { SpringApplication.run(FoodOrderConsumerApplication.class, args); } @StreamListener(target = Sink.INPUT) public void processCheapMeals(String meal) throws Exception { if(meal.contains(\u0026#34;vegetables\u0026#34;)) throw new Exception(\u0026#34;Vegetables! Move to dead letter queue!\u0026#34;); if(meal.contains(\u0026#34;poison\u0026#34;)) throw new Exception(\u0026#34;Poison! Move to dead letter queue!\u0026#34;); System.out.println(\u0026#34;Meal consumed: \u0026#34;+meal); } } This consumer code is available in the food-order-consumer GitHub repo. There is also an accompanying food-order-publisher GitHub repo if you want to run the whole example.\nNow, after sending the following JSON payload to the publisher:\n{ \u0026#34;restaurant\u0026#34;: \u0026#34;Fancy Feast\u0026#34;, \u0026#34;customerAddress\u0026#34;: \u0026#34;Buckingham\u0026#34;, \u0026#34;orderDescription\u0026#34;: \u0026#34;Tasty vegetables and coffee\u0026#34; } I get to see an exception stack trace in the consumer and the message appears in the automatically created DLQ:\nThere is much more configuration available when working with DLQ messages. You can change names, add custom routing, specify the number of re-tries, etc. 
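Custom re-try handling like that usually boils down to counting attempts in a message header. Here is a plain-Java sketch of that idea, free of any framework - RetryPolicy and shouldRequeue are hypothetical names for this illustration only, and the x-retries header name is just a convention borrowed from the example project:

```java
import java.util.HashMap;
import java.util.Map;

// Framework-free sketch of bounded re-tries tracked in a message header.
public class RetryPolicy {
    static final String X_RETRIES_HEADER = "x-retries";
    static final int MAX_RETRIES = 3;

    // Returns true if the message should be requeued (and bumps the
    // counter in the headers); false once the re-try limit is reached.
    static boolean shouldRequeue(Map<String, Object> headers) {
        Integer retries = (Integer) headers.get(X_RETRIES_HEADER);
        if (retries == null) {
            retries = 0;
        }
        if (retries < MAX_RETRIES) {
            headers.put(X_RETRIES_HEADER, retries + 1);
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        Map<String, Object> headers = new HashMap<>();
        // The first three attempts are requeued; the fourth is parked.
        for (int attempt = 1; attempt <= 4; attempt++) {
            System.out.println("attempt " + attempt + " requeue=" + shouldRequeue(headers));
        }
    }
}
```

The same decision logic appears inside the DLQ processor discussed in this post, just wired into Spring AMQP message headers instead of a plain map.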
To find out more, the official documentation is the best place to look.\nProcessing Messages from the Dead Letter Queue You know how to put messages on the DLQ; now it will be good to understand how to get them out of there. Well, even though Spring Cloud is quite opinionated on how to deal with most things, it can’t really tell you what to do with your bad messages. The DLQ is just a queue after all, and that’s how it should be treated. You can take the messages manually from the queue, make a consumer that pushes them to a database, or attempt to reprocess/repair the messages with another service. What follows is example code that takes messages from the DLQ and simulates repair or storage for further examination:\npackage com.e4developer.foodorderdlqprocessor; import org.springframework.amqp.core.Message; import org.springframework.amqp.core.MessageBuilder; import org.springframework.amqp.rabbit.annotation.RabbitListener; import org.springframework.amqp.rabbit.core.RabbitTemplate; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; @SpringBootApplication public class FoodOrderDlqProcessorApplication { private static final String ORIGINAL_QUEUE = \u0026#34;foodOrders.foodOrdersIntakeGroup\u0026#34;; private static final String DLQ = ORIGINAL_QUEUE + \u0026#34;.dlq\u0026#34;; private static final String X_RETRIES_HEADER = \u0026#34;x-retries\u0026#34;; public static void main(String[] args) { SpringApplication.run(FoodOrderDlqProcessorApplication.class, args); } @Autowired private RabbitTemplate rabbitTemplate; @RabbitListener(queues = DLQ) public void rePublish(Message failedMessage) { failedMessage = attemptToRepair(failedMessage); Integer retriesHeader = (Integer) failedMessage.getMessageProperties().getHeaders().get(X_RETRIES_HEADER); if (retriesHeader == null) { retriesHeader = Integer.valueOf(0); } if (retriesHeader \u0026lt; 3) { 
failedMessage.getMessageProperties().getHeaders().put(X_RETRIES_HEADER, retriesHeader + 1); this.rabbitTemplate.send(ORIGINAL_QUEUE, failedMessage); } else { System.out.println(\u0026#34;Writing to database: \u0026#34;+failedMessage.toString()); //we can write to a database or move to a parking lot queue } } private Message attemptToRepair(Message failedMessage) { String messageBody = new String(failedMessage.getBody()); if(messageBody.contains(\u0026#34;vegetables\u0026#34;)) { System.out.println(\u0026#34;Repairing message: \u0026#34;+failedMessage.toString()); messageBody = messageBody.replace(\u0026#34;vegetables\u0026#34;, \u0026#34;cakes\u0026#34;); return MessageBuilder.withBody(messageBody.getBytes()).copyHeaders(failedMessage.getMessageProperties().getHeaders()).build(); } return failedMessage; } } What I think is most interesting here is the code that attempts to repair the message. If it is possible to repair the message, then it can be requeued and processed successfully by the original, intended consumer. The custom re-try logic is implemented by looking at the headers. What you can also see is the break-out from the standard Spring Cloud Stream processing, as more bespoke handling of the message is required.\nThis service code is also shared on GitHub. It is heavily inspired by the approach from the official documentation, which is worth checking out.\nConclusion Dead Letter Queue is an important pattern that you should be familiar with. Spring Cloud Stream together with RabbitMQ make it rather easy to get started, but if you want to start repairing messages- a tailored approach needs to be taken. With these techniques you should be in a good position to deal with bad messages.\n","permalink":"https://e4developer.com/posts/handling-bad-messages-with-rabbitmq-and-spring-cloud-stream/","summary":"\u003cp\u003eWhen dealing with messaging in a distributed system, it is crucial to have a good method of handling bad messages. 
In complicated systems, bad messages and general failures when consuming messages are unavoidable. See how you can deal with this problem using Dead Letter Queues, RabbitMQ and Spring Cloud Stream.\u003c/p\u003e\n\u003cp\u003eWhen dealing with messages in distributed systems it is important to know when things go wrong. When your services simply call one another, it is often quite trivial- if your call failed, you know that you have a problem! With messaging it is often not so clear- as a service, if you successfully published a message on a queue- your responsibility ends. Whose responsibility is it then to ensure that the message published was correct and, if not, that something will be done about it? Here, in the spirit of *“smart pipes”*, we assume that it is the broker’s responsibility to provide this service…\u003c/p\u003e","title":"Handling bad messages with RabbitMQ and Spring Cloud Stream"},{"content":"Microservices have won a major following from technology enthusiasts all over the world. A couple of weeks ago I was giving an introduction to Spring Cloud at one of Scott Logic’s breakfast technical talks (techie brekkies). All the developers in the room were interested and could see a number of benefits and challenges with this, still rather new, approach. At the end of the session, during question time, I was asked by one of the business people in the room- “So, what is the business case for this, what business problem does it solve?”. It is easy to provide a few answers about scalability and maintainability of microservices, but I don’t think that speaks clearly to our business people. In this article I will look deeper into the business benefits provided by investing in a microservices architecture.\nBefore going right into the benefits and the business case, a word of caution. Microservices can be done wrong, and IT projects often fail. If anything, microservices can be more complicated than the more traditional approaches. 
In this article I will not be examining this problem too much, and will assume that after due diligence, the microservices approach was suggested as a viable option by the engineers. So, given that you can choose microservices and they are recommended as a reasonable approach, what business benefits do you get by choosing them?\nMicroservices architecture is built around the open source proposition The first and quite a big benefit is that this is all free! Most microservices frameworks (like Spring Cloud, for example) are built around Open Source platforms and ideology. I don’t think I need to go on explaining here why Open Source is great for everyone, including businesses. Moreover, this is not the kind of Open Source that nobody maintains and that has no enterprise support available if there is a need for it. Companies like Pivotal (creators of Spring Cloud) or Lightbend (responsible for Lagom) provide enterprise-grade support for their products. There are many others if you choose different frameworks/platforms.\nI am sure that no one enjoys paying huge amounts of money for complicated application servers and databases that are supposed to make all our problems disappear. The great news is that you really don’t need that anymore. Microservices are not only built on Open Source, but they are also proven at enterprise and Internet scale.\nIf you actually have a budget to spend regardless, it is better to use it to hire more people or to invest in training your own workforce. Microservices are still quite new and not everyone has knowledge or experience in this area. If you already have microservices-ready people, you could invest in development tools or more modern workstations instead- now you have options!\nMicroservices enable business to better utilize infrastructure and save money This business benefit is also all about money… But there is more to it than most people realize at first. 
There is of course a lot of money to be saved by not spending money on servers that sit idle most of the time, utilized to their maximum for 10% of their running time or less. With microservices being much more lightweight and scaling usually being much easier, there is no need to provision so much power up-front. As a service sees more use, more instances can be created. Simple as that.\nWhen dealing with traditional monoliths, there is another problem- often this rapid scalability is not available. Here, the loss of money can be even greater! Imagine your software has a sudden spike in demand and you not only can’t deal with all this extra business (and extra profit), but potentially your whole solution stops performing. Here, a product/marketing success may turn into a business disaster, with even the basic levels of service unavailable. Instead of doubling your profit, you run the risk of losing everything and ending up with a tarnished reputation.\nThis automatic scalability often depends on having cloud-deployment capabilities. Even if you are not deployed on the cloud and can’t rapidly provision new hardware, you may still benefit. Because of the way microservices partition the whole system, you could identify the underutilized parts of the system and scale them down, freeing capacity for the service under stress.\nMicroservices help to achieve the agility that is a goal for so many organizations When talking here about agility I mean more than simply running an agile development process (although this is a part of it). For many organizations, being able to dynamically respond to changes in the marketplace and implement ideas faster than the competition is absolutely crucial for their long-term survival. Let’s have a look at how microservices help foster these two types of agility:\nDevelopment Agility: Running development teams in an agile fashion is good for business, no question about that. 
It is also incredibly hard to have any sort of development agility when 80 people work on the same software component. This is how it often looks when dealing with monoliths. Sure, it can work, but how wasteful can it be! We have all seen smaller teams achieve feats that were impossible for teams multiple times their size. It is enough to look at the multiple startups that regularly disrupt different businesses with their new products. How can you have the best of both worlds? Assuming that you can actually afford 80 amazing people to work on your project- split them into smaller teams (let’s say 10 teams of 8 people) and give them full responsibility for implementing their own service (or services). If you can manage this well (and it is not that easy!) you may have the power of multiple startups working in harmony with a strong business behind them. It seems that nothing should be impossible with this setup! Product Agility: Innovating and introducing new products and offerings is key to long-term success in business. I have seen multiple companies built around a large product that is getting older, larger and more difficult to change every year. It can be extremely dangerous for a company to find itself in such a position. With microservices you leave yourself a way out. Changing a 1,000,000+ line monolith can become near impossible. Changing a microservice that has well-defined contracts- now, this is something that can be done! With microservices it is easier to continue changing and innovating, as the coupling between your services is genuinely low. From a strategic perspective, this should be an important point for a company. Microservices help to recruit and retain talent Companies are built by people. Usually, the more motivated, talented and experienced your people are- the better the business outcomes. 
Of course there are other factors at play, but a great company that loses all its good people often ends up in trouble, while a troubled company that has amazing staff usually lives to fight another day. Why would you not want the best people you can get? Here are the reasons why microservices attract good engineers:\nIndependent services mean that developers often have real impact and responsibility with the work they produce They are usually making use of modern technology that is pleasant to work with The code produced is often cleaner and easier to work with Devops culture is a great and rewarding way of working- it usually comes with microservices Other industry leaders are already doing this and good people attract more good people There is always more to learn- this is a very active space that is already good and mature and is only getting better Beyond these points, if you are going to introduce microservices, make sure that your working culture is devops-oriented. Without the devops culture, which I understand as teams being responsible for the service all the way from source code to production deployment, you may find it difficult to succeed. The independence and responsibility are not really optional here; they are a requirement for high-performing microservices teams. To really understand what I am talking about here, have a look at The DevOps Handbook.\nIn short, if you are recruiting people and you mention that your company is doing microservices, prepare for a lot of interest. Also, be ready to answer some technical questions as the curiosity of your candidates spikes!\nSome of the largest companies out there use them, they are likely the future The last important point here is that this is not a proof of concept. By going with microservices, you are going in the direction the industry leaders are taking- Netflix, eBay, Amazon- to name just a few. Some of the world’s most successful companies are going this way. 
Super-successful startups and established brands alike. By choosing microservices you are not embarking on a lonely journey, but rather joining a group of industry leaders pushing further how fast and how well products and services can be built and maintained.\nThis is a sensible choice, but more than that- it is a future-proof choice. If different technologies emerge and new trends in Software Architecture become the norm, you are in a position to take advantage of them. As mentioned earlier, with microservices you leave yourself an option to always change. This is a rare opportunity for a company to choose something that is both modern and proven all over the world at different scales.\nSummary Be under no illusion- implementing microservices may not be easy. This article does not claim that. Software development is never easy in the real world. What this article is trying to do is to showcase why businesses should take a good look at microservices when considering where they want to go strategically. We are still in relatively early days, but microservices are already mainstream. Not everyone is reaping the benefits of this new paradigm. If you are given a chance to steer your company in this direction, if you have people ready to embark on the challenge- the rewards are great.\n","permalink":"https://e4developer.com/posts/the-business-case-for-microservices/","summary":"\u003cp\u003eMicroservices have won a major following from technology enthusiasts all over the world. A couple of weeks ago I was giving an introduction to Spring Cloud at one of \u003ca href=\"http://www.scottlogic.com\"\u003eScott Logic\u003c/a\u003e’s breakfast technical talks (techie brekkies). All the developers in the room were interested and could see a number of benefits and challenges with this, still rather new, approach.
At the end of the session, during the question time I was asked by one of the business people in the room- \u003cem\u003e“So, what is the business case for this, what business problem does it solve?”\u003c/em\u003e. It is easy to provide a few answers about scalability and maintainability of microservices, but I don’t think that speaks clearly to our business people. In this article I will look deeper into the business benefits provided by investing into microservices architecture.\u003c/p\u003e","title":"The business case for Microservices"},{"content":"With Spring Boot 2.0 release just around the corner (at the time of writing we have RC1) it is important to see what changes it brings. Even if you are not planning to migrate shortly, it is good to see what is new in this biggest Spring Boot release since the 1.0 version. In this blog post I won’t go through every detail, but cover the most important things.\nNo more support for Java 7 and below Java 8 becomes a minimum requirement with Spring Boot. The release notes for the first milestone do not elaborate on that point further. Most developers will probably agree that this is for the better and if somehow you are still below Java 8 and planning to use the latest Spring Boot… Now you have a good business reason to upgrade.\nMultiple changes to property names and packages – this comes with support Spring Boot team decided to extensively refactor their package structures and properties. While the package name changes will require some refactoring to get your code working again, it may be a bit difficult to see how to translate all these properties. Don’t worry- Spring Boot has you covered with the spring-boot-properties-migrator module. This will basically print diagnostic information on the startup (to help you fix the config) and make the old config work temporarily. 
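To give a flavour of what the migrator flags, here is a small illustrative example- the exact property names below are from my reading of the 2.0 migration notes, so do verify them against the diagnostic output the migrator prints for your own configuration:

```properties
# Spring Boot 1.x property names
server.context-path=/api
spring.http.multipart.max-file-size=10MB

# Their Spring Boot 2.0 equivalents
server.servlet.context-path=/api
spring.servlet.multipart.max-file-size=10MB
```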
All you need to do to make use of it is to include the following dependency in your pom file:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-properties-migrator\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; This is a must if you are actually migrating to Spring Boot 2.0 from a lower version.\nSpring Boot Gradle plugin mostly rewritten This is only relevant to you if you were making use of this plugin before (or if you are a Gradle fan looking to start working with Spring Boot). The changes here are far too extensive to cover in a blog post, so I will refer you to the latest snapshot of the Gradle plugin guide (PDF).\nMultiple changes to the Security module The main goal here was to simplify the default security configuration. From version 2.0, as soon as you put Spring Security on the classpath, the default security configuration will kick in. This time the Spring team opted for the most secure option- everything will be secured by default. If you added Actuator to your project, it will be secured as well. You may be affected by these changes if you use any of the following properties:\nsecurity.basic.authorize-mode security.basic.enabled security.basic.path security.basic.realm security.enable-csrf security.headers.cache security.headers.content-security-policy security.headers.content-security-policy-mode security.headers.content-type security.headers.frame security.headers.hsts security.headers.xss security.ignored security.require-ssl security.sessions The good news is that there are a couple of great blog posts on the subject written by the Spring authors themselves: Security changes in Spring Boot 2.0 and Spring Boot Security 2.0. Check them out if you are using these modules.\nLarge changes to Spring Boot Actuator Spring Boot Actuator is probably one of the most popular Spring Boot modules.
The fact that it is getting some major overhaul should be of interest to you! What changed about Actuator? Mostly everything! You can expect changes to the programming model, configuration, security and the response format of some endpoints.\nIf you are implementing custom Actuator endpoints, you should read Migrating a custom Actuator endpoint to Spring Boot 2, written by the Spring Boot maintainers. If you are generally interested in how the new Actuator works- the new API Documentation is already online.\nIt seems that the main reason behind the changes to the Actuator module was enabling its use outside of Spring Boot and Spring MVC. This technology-agnostic approach will hopefully result in even more interest and support, potentially from outside the Spring community.\nSpring Boot’s own metrics replaced with Micrometer support You might not have heard of Micrometer (official website) before. The best way to explain what it is would be to borrow from the elevator pitch welcoming you to their website:\nMicrometer provides a simple facade over the instrumentation clients for the most popular monitoring systems, allowing you to instrument your JVM-based application code without vendor lock-in. Think SLF4J, but for metrics.\nI think this describes Micrometer perfectly! If you are using Actuator, then Micrometer is already there. If you are planning on using Micrometer (and I think you definitely should!) there is a very good guide about integrating with Spring Boot 2 already available on the official website.\nSupport for remote debugging over HTTP is removed To be honest this one worries me a bit. I always found it very useful to have that option available. I am sure that workarounds can be found (and with microservices being really small this is not as crucial as with monoliths).
This was removed due to issues with JDK 9, so I have some hope that it may be brought back.\nUpgrading libraries and Tomcat version across the board The key requirements for minimum versions of different libraries and tools are as follows:\nTomcat – 8.5 Hibernate – 5.2 (with some more minor changes to JPA) Gradle – 4.2 There are also a myriad of smaller library upgrades that are present in most Spring Boot releases, so we won’t be going into details here.\nSummary Spring Boot 2.0 brings plenty to the table. The nature of the changes is more evolutionary than revolutionary. This is still the Spring Boot that so many Java developers (and other JVM citizens) grew to love. Changes that brought Micrometer and modernized Actuator are aimed at opening up the ecosystem and making use of the best solutions available anywhere (not only in the Spring world). With packages and properties renamed and redesigned, we gain more maturity and an even better foundation on which we can build our microservices. I am looking forward to future Spring Boot releases and the progress they will bring!\n","permalink":"https://e4developer.com/posts/what-you-need-to-know-about-spring-boot-2-0-rc1/","summary":"\u003cp\u003eWith the Spring Boot 2.0 release just around the corner (at the time of writing we have RC1) it is important to see what changes it brings. Even if you are not planning to migrate shortly, it is good to see what is new in this biggest Spring Boot release since the 1.0 version. In this blog post I won’t go through every detail, but cover the most important things.\u003cimg loading=\"lazy\" src=\"/posts/what-you-need-to-know-about-spring-boot-2-0-rc1/images/spring-boot-2-1024x277.jpg\"\u003e\u003c/p\u003e\n\u003ch5 id=\"no-more-support-for-java-7-and-below\"\u003eNo more support for Java 7 and below\u003c/h5\u003e\n\u003cp\u003eJava 8 becomes a minimum requirement with Spring Boot.
The \u003ca href=\"https://github.com/spring-projects/spring-boot/wiki/Spring-Boot-2.0.0-M1-Release-Notes\"\u003erelease notes for the first milestone\u003c/a\u003e do not elaborate on that point further. Most developers will probably agree that this is for the better and if somehow you are still below Java 8 and planning to use the latest Spring Boot… Now you have a good business reason to upgrade.\u003c/p\u003e","title":"What you need to know about Spring Boot 2.0 (RC1)"},{"content":"Message queues are very important and useful tools that you can utilize for your microservices-oriented architecture. Many developers are hesitant to use them, fearing that they add too much complexity and too steep a learning curve to their system. I will show you how to make use of RabbitMQ and Spring Cloud Stream to get some basic messaging routes set up with very little effort!\nWhy use RabbitMQ RabbitMQ is an immensely popular message broker. In fact, the official website claims that this is the most popular open source message broker out there! This makes it a great candidate to be the message broker for your system. Popularity is not a good enough reason for using something (but it usually brings plenty of benefits such as community and support), so rest assured- RabbitMQ has much more to offer than its fame. It is very easy to use (you will see) and it can reliably handle 20,000 messages per second, with the largest documented deployment – Instagram – doing more than 1,000,000 messages per second!\nWhy didn’t I choose Kafka for this blog post? Kafka is an amazing technology. It can handle truly large data. If you are doing more than 100,000 messages per second- go for Kafka! Then, your use case is probably so advanced that you may not need this blog post. If you don’t need so much raw power and you deal with a more standard microservices deployment- I believe you will like what RabbitMQ has to offer- and the simplicity of setup.
Feel free to check Kafka afterward- nothing wrong with knowing your options!\nGetting RabbitMQ It is quite easy to set up RabbitMQ on your machine. You can follow the official download and installation guide when dealing with a serious deployment. Here I want to show you how to get RabbitMQ locally in a really easy way. If you do not already have Docker installed- get it from the official website. If you are not sure why you would want Docker installed on your machine, read my blog post on the topic.\nAssuming you already have Docker on your machine, the official Docker Hub repository for RabbitMQ is here. It contains plenty of useful information about running and setting up RabbitMQ with Docker. For now you won’t be needing all that as we are starting with just a single command:\ndocker run -d --hostname my-rabbit --name some-rabbit -p 15672:15672 -p 5672:5672 rabbitmq:3-management\nThis will get the official image with the management console added. It will also expose ports 15672 – for the management console and 5672 – for connections to RabbitMQ. That means you can inspect the management console by going to http://localhost:15672, the default username and password being guest/guest. Once you go to that URL and log in, you should see:\nCongratulations! You have installed and are running RabbitMQ on your machine with Docker.\nUsing RabbitMQ with Spring Cloud Stream The example I will show you will be a Food Orders Processing Application. The idea is very simple: you have a service that you place a food order with and then you have one or more services that consume that order:\nWe already have a working RabbitMQ and we won’t have to do anything special in order to use it. It is available on http://localhost:5672, accepting connections with username/password guest/guest. I assume at least a basic understanding of what Spring Boot is; if you need a reminder- check this blog post.
Let’s build the consumer service first:\nBuilding Food Order Consumer It may seem strange to build the consumer first, but there is logic to it. When we build the consumer, we will create durable queues in RabbitMQ on its startup. We want that queue up-front so no messages get lost. The consumer is also a bit simpler. The only dependency you need for your Spring Boot project is spring-cloud-starter-stream-rabbit. With this dependency added, we can build our consumer:\npackage com.e4developer.foodorderconsumer; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; import org.springframework.cloud.stream.annotation.EnableBinding; import org.springframework.cloud.stream.annotation.StreamListener; import org.springframework.cloud.stream.messaging.Sink; @EnableBinding(Sink.class) @SpringBootApplication public class FoodOrderConsumerApplication { public static void main(String[] args) { SpringApplication.run(FoodOrderConsumerApplication.class, args); } @StreamListener(target = Sink.INPUT) public void processCheapMeals(String meal){ System.out.println(\u0026#34;This was a great meal!: \u0026#34;+meal); } } All we need to do here is to bind ourselves to the Sink.class. What is that? Well, this is the Spring Cloud Stream component that enables us to read messages. If we were also going to send messages then we would need Processor.class. So how does the Sink know where to read the messages from? This is all in the properties file:\n#random for multiple instances server.port=0 spring.rabbitmq.host=localhost spring.rabbitmq.port=5672 spring.rabbitmq.username=guest spring.rabbitmq.password=guest spring.cloud.stream.bindings.input.destination=foodOrders spring.cloud.stream.bindings.input.group=foodOrdersIntakeGroup You can see that the RabbitMQ connection details are configured here. We also have server.port set to 0- this auto-assigns a port and makes it easy to start multiple instances.
The first interesting property is spring.cloud.stream.bindings.input.destination– this tells us which topic (I am using this term loosely here- it is actually called an Exchange in RabbitMQ) the messages should come from. The second property, spring.cloud.stream.bindings.input.group, is the name of the input group– this is the name of the queue that will be created and subscribed to the exchange in order to get the messages. Without this set, each consumer will be started on its own unique queue, which by default is not durable. This is not usually what we want, although it may be useful for some sort of health-monitoring service.\nI will run a couple of these consumers now; if you want to follow along, you can clone my GitHub project. Once these are started you should see the Exchange and IntakeGroup created automatically:\nI hope you can see the same (as you should with RabbitMQ running). Time to build the publisher and then we can see it all in action!\nBuilding Food Order Publisher Here we want just two capabilities: a controller that can receive a FoodOrder and some mechanism for publishing it onto a queue. Let’s start with the basics.
I will create a FoodOrder Class that we will expect to receive as a JSON from the controller:\npackage com.e4developer.foodorderpublisher; import com.fasterxml.jackson.annotation.JsonIgnoreProperties; @JsonIgnoreProperties(ignoreUnknown = true) public class FoodOrder { private String restaurant; private String customerAddress; private String orderDescription; public FoodOrder(){ } public FoodOrder(String restaurant, String customerAddress, String orderDescription) { this.restaurant = restaurant; this.customerAddress = customerAddress; this.orderDescription = orderDescription; } public void setRestaurant(String restaurant) { this.restaurant = restaurant; } public void setCustomerAddress(String customerAddress) { this.customerAddress = customerAddress; } public void setOrderDescription(String orderDescription) { this.orderDescription = orderDescription; } public String getRestaurant() { return restaurant; } public String getCustomerAddress() { return customerAddress; } public String getOrderDescription() { return orderDescription; } @Override public String toString() { return \u0026#34;FoodOrder{\u0026#34; + \u0026#34;restaurant=\u0026#39;\u0026#34; + restaurant + \u0026#39;\\\u0026#39;\u0026#39; + \u0026#34;, customerAddress=\u0026#39;\u0026#34; + customerAddress + \u0026#39;\\\u0026#39;\u0026#39; + \u0026#34;, orderDescription=\u0026#39;\u0026#34; + orderDescription + \u0026#39;\\\u0026#39;\u0026#39; + \u0026#39;}\u0026#39;; } } This is quite simple, providing an empty constructor and setters with getters to make an automatic construction with Jackson simple. I also added toString() method so that we can do some pretty-printing in the Controller. The next step is to define FoodOrderSource. This will simply be an interface that defines ways of obtaining the MessageChannel object needed to send the message. 
The code is as follows:\npackage com.e4developer.foodorderpublisher; import org.springframework.cloud.stream.annotation.Output; import org.springframework.messaging.MessageChannel; public interface FoodOrderSource { @Output(\u0026#34;foodOrdersChannel\u0026#34;) MessageChannel foodOrders(); } In order to use that in a controller we will need FoodOrderPublisher with a correct binding defined:\npackage com.e4developer.foodorderpublisher; import org.springframework.cloud.stream.annotation.EnableBinding; @EnableBinding(FoodOrderSource.class) public class FoodOrderPublisher { } With all that ready, we can define a simple controller that will make use of these classes and publish the message upon receiving the FoodOrder:\npackage com.e4developer.foodorderpublisher; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.integration.support.MessageBuilder; import org.springframework.web.bind.annotation.RequestBody; import org.springframework.web.bind.annotation.RequestMapping; import org.springframework.web.bind.annotation.ResponseBody; import org.springframework.web.bind.annotation.RestController; @RestController public class FoodOrderController { @Autowired FoodOrderSource foodOrderSource; @RequestMapping(\u0026#34;/order\u0026#34;) @ResponseBody public String orderFood(@RequestBody FoodOrder foodOrder){ foodOrderSource.foodOrders().send(MessageBuilder.withPayload(foodOrder).build()); System.out.println(foodOrder.toString()); return \u0026#34;food ordered!\u0026#34;; } } This is all great, but how do these classes know what queue to talk to? Once again we can look at the .properties file in order to get the answer:\nserver.port=8080 spring.rabbitmq.host=localhost spring.rabbitmq.port=5672 spring.rabbitmq.username=guest spring.rabbitmq.password=guest spring.cloud.stream.bindings.foodOrdersChannel.destination=foodOrders spring.cloud.stream.default.contentType=application/json You can get all that code from my GitHub project. 
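Before reaching for a GUI client, you can also exercise the controller from the command line. Here is a minimal sketch- the payload values are made up, and it assumes the publisher is running locally on port 8080 as configured above:

```shell
# A hypothetical FoodOrder payload matching the fields of the FoodOrder class
PAYLOAD='{"restaurant":"Pizza Palace","customerAddress":"1 Example Street","orderDescription":"one margherita"}'
echo "$PAYLOAD"

# With the publisher running, the order could then be posted with:
#   curl -X POST -H "Content-Type: application/json" -d "$PAYLOAD" http://localhost:8080/order
```

The curl invocation is left as a comment so the snippet is safe to run even without the service up.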
With all this defined, we are ready to start the application and send some POST requests with different food orders!\nSeeing it all in Action! To see it all working, let’s start by running Postman (or any other client that lets you send POST requests) and send some FoodOrders!\nAs you can see these are being received by the FoodOrderPublisher and published to the queue via the Exchange:\nMessages are also being uniquely processed by different consumers- the same message does not get processed twice!\nSummary What you have seen here are the basics behind Spring Cloud Stream integration with RabbitMQ. After following the steps here and understanding what is happening, you should have no problems getting this up and running on your machine. This is not production-ready code, as when dealing with asynchronous processing you need to worry about a few extra things- what happens when your messages are broken (or your consumer code is broken), how you handle errors and monitoring, etc. This is all important, but you need to start somewhere! Being able to easily run this on your machine should give you that place to start and the confidence to experiment and go further!\n","permalink":"https://e4developer.com/posts/setting-up-rabbitmq-with-spring-cloud-stream/","summary":"\u003cp\u003eMessage queues are very important and useful tools that you can utilize for your microservices-oriented architecture. Many developers are hesitant to use them, fearing that they add too much complexity and too steep a learning curve to their system.
I will show you how to make use of RabbitMQ and Spring Cloud Stream to get some basic messaging routes set up with very little effort!\u003c/p\u003e\n\u003cp\u003e\u003cimg loading=\"lazy\" src=\"/posts/setting-up-rabbitmq-with-spring-cloud-stream/images/rabbit-spring-cloud-1024x254.jpg\"\u003e\u003c/p\u003e\n\u003ch3 id=\"why-use-rabbitmq\"\u003eWhy use RabbitMQ\u003c/h3\u003e\n\u003cp\u003eRabbitMQ is an immensely popular message broker. In fact, \u003ca href=\"https://www.rabbitmq.com/\"\u003ethe official website\u003c/a\u003e claims that this is the most popular open source message broker out there! This makes it a great candidate to be the message broker for your system. Popularity is not a good enough reason for using something (but it usually brings plenty of benefits such as community and support), so rest assured- RabbitMQ has much more to offer than its fame. It is very easy to use (you will see) and it can reliably handle 20,000 messages per second, with the largest documented deployment – Instagram – doing more than 1,000,000 messages per second!\u003c/p\u003e","title":"Setting up RabbitMQ with Spring Cloud Stream"},{"content":"Microservices are gaining popularity and more developers end up working with them. If you are a developer who is going to work with microservices architecture, or an employer who is looking to hire someone- what are the most important skills for a microservices developer to possess? Read on to find out.\nAs with any emerging technologies and trends, there is some learning to be done to master them. It is the nature of our jobs as developers- to stay up to date with the latest and greatest patterns and architectures. So, what can you start doing now to get good at these microservices? Who should you look for to join your microservices-oriented team?
Here I gathered seven essential skills that will help any developer feel at home with microservices:\nArchitecture Knowledge It is essential to familiarize yourself with common microservices patterns. I recommend checking my Spring Cloud Introduction, as just by reading about what Spring Cloud has to offer and learning its modules, you are going to have a much better understanding of how things should be structured. If you don’t know about standard patterns, you will attempt to solve problems already solved and you will be unlikely to choose the best solution.\nI also highly recommend grabbing a copy of Building Microservices by Sam Newman, which I reviewed just before writing this blog post. By reading this you will be sure to be aware of the patterns and best practices, albeit in a framework-free way.\nThe combination of knowing one microservices framework like Spring Cloud and good book knowledge from Building Microservices will set you up for a great start in the microservices world.\nDomain Modelling Even if you understand your architecture and patterns perfectly, it is still not that easy to be very successful with microservices. Splitting responsibilities between different parts of the system can get very difficult very quickly. You need to be good at domain modelling and understanding how to assign responsibility. One trick that I can recommend is drawing more. Drawing with your team and other people involved in the project is a great tool for fostering a shared understanding of the domain.\nBeing good at linking your domain and design is a universally useful skill for a software developer. If you want to get much deeper into this subject, you may want to check out Domain Driven Design. You can find a great explanation of what DDD really is in this Stack Overflow answer (of all the places!). There are many books and articles on the subject that you can check out as well!\nDevOps and Containers The idea behind successful microservices is to work in a DevOps way.
For the purpose of this article that means taking ownership of the service all the way from writing the code to deploying it in production. Even if you are not going to be the one deploying it, you should have some idea of what this deployment will look like. There is no hiding, you will have to become somewhat familiar with containers, Docker, Kubernetes etc. The good news is that you can get Docker on your machine and it is a very useful tool!\nSo beyond containers, what will you have to know? Queues, messaging, databases, some cloud (AWS, Azure)… Wow, it seems like a lot! Don’t worry, if you are working in a DevOps team, there are likely to be some experienced colleagues there who can assist you. No one becomes an expert overnight, and some of these technologies may be new to you if you were not exposed to the operations side of things. The good news is- these can be fun, challenging and useful!\nIf there is just one book I would recommend to become more confident with the DevOps mindset and skill set, it would be The DevOps Handbook, based on the very entertaining Phoenix Project.\nSecurity As you may imagine, securing many things is more difficult than securing a single thing. With microservices, security concerns are much more at the front of everybody’s mind than they were when everyone was working with monoliths. What specific security things should you learn? I really recommend looking at common Single Sign-On (SSO) implementations, especially at the OAuth2-related tech. Spring Cloud Security specifically can teach you some best practices and give you good ideas about implementing secure microservices.\nWhat other security concerns are there when dealing with a distributed architecture? Securing data at rest, securing configuration- microservices have their own configs and data.
These are places where security can often be compromised.\nTo quote one rule from Sam Newman’s book that I think is extremely important:\nDon’t invent your own security protocols.\nFollow the best practices from established frameworks and you should be fine!\nTesting One thing that can be extremely detrimental to your productivity and success is having services that constantly fail and don’t fulfil their contracts. I have noticed that because microservices are smaller and look less serious or businesslike than large monolithic applications, some developers neglect testing.\nPlease, don’t do it! Microservices offer ample opportunities for creating well-tested and robust solutions, so don’t pass on them just because testing is not trivial or unit tests don’t seem to be the answer. So what kind of tests should you be thinking about? Well, here is my proposition:\nUnit tests (yes, these are still important!) Service/API tests – you want to test your endpoints as thoroughly as possible End-to-end tests of the integrated system (continuous integration can be a great help here) Continuous Integration As I mentioned in the point before, end-to-end testing is important. With a microservices system, to be sure that everything is working correctly, you need to see the microservices interacting correctly. You can only go so far when testing such complex systems in isolation. The best way to deal with this problem is to set up continuous integration. Whenever you merge your code to master (and I hope you are doing pull requests!) you should be automatically deploying to an integration environment where different tests are run (and teammates can engage in some exploratory testing).\nSo, no more excuses and manual deployment, this is the time to set up and learn Jenkins/TeamCity or whatever CI tool you use and make the most of it!
This point is partially an extension of the DevOps skill, but important enough to merit its own mention!\nTeamwork Last but not least, you really need to embrace teamwork and learn about working with people. When writing a small part of a very large system it is easier to isolate yourself and live in your own bubble. No such chance here! Multiple integrating systems and DevOps culture mean that you will have to interact with people a lot!\nThere is also an interesting insight into system design called “Conway’s Law”:\nAny organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure.\nThis emphasizes that great cooperation across the board is a prerequisite for great microservices. How can you have good communication between microservices when it is lacking between people?\nSummary Microservices as a trend bring a lot of change to the daily lives of software developers! Skills that were considered niche or special until recently became an expectation and a requirement. There is a new way of working- DevOps- that is likely to cause an even greater shakeup in how we see a developer’s job than Agile did! I think this is a time of great opportunity for all of us in the software development community to learn much more and be more effective than ever! Don’t be afraid of this change- be excited!\n","permalink":"https://e4developer.com/posts/seven-essential-skills-for-microservices-developers/","summary":"\u003cp\u003eMicroservices are gaining popularity and more developers end up working with them. If you are a developer who is going to work with microservices architecture, or an employer who is looking to hire someone- what are the most important skills for a microservices developer to possess?
Read on to find out.\u003c/p\u003e\n\u003cp\u003e\u003cimg loading=\"lazy\" src=\"/posts/seven-essential-skills-for-microservices-developers/images/mag-seven.jpg\"\u003e\u003c/p\u003e\n\u003cp\u003eAs with any emerging technologies and trends, there is some learning to be done to master them. It is the nature of our jobs as developers- to stay up to date with the latest and greatest patterns and architectures. So, what can you start doing now to get good at these microservices? Who should you look for to join your microservices-oriented team? Here I gathered seven essential skills that will help any developer feel at home with microservices:\u003c/p\u003e","title":"Seven Essential Skills for Microservices Developers"},{"content":"A lot of people want to start working with microservices and don’t quite know where to start. I remember being there- finding out that my next project was going to use microservices architecture and that I should get familiar with it. Of course, I had heard about microservices before and had read some blog posts, but I felt that my knowledge had major gaps. If you are in this situation- worry no more! Just get yourself a copy of “Building Microservices” by Sam Newman and read it! Read on to find out why I think this book has you covered.\nCriteria for a good Microservices Introduction book The first book you read about microservices should be language-agnostic. I don’t recommend picking up something that tells you how to get your NodeJS microservices to rule the world or how Kafka Streaming will forever change the way you build your choreography. You will end up spending too much time focusing on technical details and not enough appreciating the intent behind the patterns and solutions.\nCriteria One – Language Agnostic Another thing that is a must is broad coverage. It can be very dangerous to go into microservices with a limited idea of what is out there.
As they say- if all you have is a hammer, everything looks like a nail… If all you have is a REST API then everything looks like a case for orchestration.\nCriteria Two – The book should provide broad coverage of microservices topics The final thing is that the book should not be too wordy. If this is your starting book, you don’t need amazingly deep insight into these patterns- that will come as you start using them and seeing them in the real world. You want a clear and concise explanation that will motivate you to learn more.\nCriteria Three – The book should be concise and easy to read Building Microservices by Sam Newman – Review The first thing you notice about the book is how small it is- at only 280 pages and in a rather small format, it is very compact. Don’t be fooled by that- the book is packed full of information. This is the main impression that I was left with after reading it- a marvel at how much is out there to learn and understand. If you are relatively new to microservices, this is an absolute treasure trove of knowledge and best practices.\nSam Newman does not focus on specific languages. What he does, though, is take a lot of examples from projects that he has worked on, which makes this book ring very true for experienced developers. This approach makes the book relevant to developers coming from multiple backgrounds. Examples and real experience from the author give credibility to his opinions and advice.\nThe thing that I am most impressed with is the breadth that is being covered here. Just by looking at the Table of Contents we can see Chapters such as:\nWhat About Service-Oriented Architecture? No Silver Bullet Building a Team How to Model Services Implementing Asynchronous Event-Based Collaboration Authentication and Authorization Implementing Service Tests Many, many more… This is nowhere near the full scope of the book (that you can check out on the official website). 
The scope is very impressive and the fact that it is all packaged within the 280 pages leaves no space for waffle.\nMy favorite chapter was the one on Integration, where we have a lot of crucial patterns discussed in an informed way. Too often you hear people saying- always use choreography (services reading and publishing to queues) instead of orchestration (services calling each other). There is a preference for one over the other, but real life is rarely that simple. This chapter makes strong cases for best practices (choreography) while explaining other approaches and how to make them work. There are some opinions on code reuse that I have written my own take on (in a blog post I published on the Scott Logic website)- code reuse is still a hot topic when discussing microservices with others. Reading multiple sources, combined with your own experience, is what gives the best understanding of the issues. Building Microservices gives you a good starting point in nearly all microservice-related topics!\nOne thing to be aware of is that this is an introduction book. It will make you aware of the breadth of the topic and teach you a lot, but it will not make you an expert in microservices design and architecture. I believe this misunderstanding made some people rate it a bit lower on Amazon. I don’t think it is reasonable to ask anyone to make you an expert in microservices in 280 pages, so I don’t see it as a drawback.\nOverall, this is the best book that I know of for someone who wants to get started in microservices. It fulfills my criteria for a good microservices introduction and I highly recommend it.\n","permalink":"https://e4developer.com/posts/starting-with-microservices-read-building-microservices/","summary":"\u003cp\u003eA lot of people want to start working with Microservices and don’t quite know where to start. I remember being there- finding that my next project is going to use microservices architecture and I should get familiar with it. 
Of course, I had heard about microservices before and had read some blog posts, but I felt that my knowledge had major gaps. If you are in this situation- worry no more! Just get yourself a copy of \u003cem\u003e“Building Microservices”\u003c/em\u003e by Sam Newman and read it! Read on to find out why I think this book has you covered.\u003c/p\u003e","title":"Starting with Microservices: Read “Building Microservices”"},{"content":"If you are interested in building Microservices in the JVM ecosystem- you have to check out Spring Cloud. Spring Cloud is a project whose goal is to make Microservices architecture and patterns simple and practical to use. Spring Boot provides an opinionated way of making a microservice; Spring Cloud gives you an opinionated framework for getting your architecture around them.\nThe Scope of Spring Cloud Spring Cloud is a large project that is difficult to talk about as a whole. It is easier to look at individual components and see what they bring to the table. One great thing about Spring Cloud is that you are free to mix and match the components. You may for example want to use Spring Cloud Config server and nothing else. You can even use that with non-Java based microservices. Here are the parts of Spring Cloud with brief descriptions of what they do:\n**Spring Cloud Config** – Git or local file-based configuration server. It provides support for encryption, refreshing of configuration and seamless integration with any Spring Boot based application. JSON and other endpoints are supported. **Spring Cloud Netflix** – Battle-tested Netflix components including Service Discovery (Eureka), Circuit Breaker (Hystrix), Intelligent Routing (Zuul) and Client Side Load Balancing (Ribbon). **Spring Cloud Consul** – Consul integration for Spring Boot apps. It provides Service Discovery, Distributed Configuration and Control Bus. **Spring Cloud Security** – OAuth2 security for Spring Boot, providing single sign-on, token relay and token exchange. 
It also enables a declarative model that can be used to secure your services in a more fine-grained fashion. **Spring Cloud Sleuth** – Distributed tracing, latency, logs. It borrows heavily from Dapper, Zipkin and HTrace. Your go-to tool for debugging and investigating performance of your services. **Spring Cloud Stream** – Framework for building message-driven microservices. If you have heard about choreography, this is the place to start. It enables you to seamlessly integrate Kafka, RabbitMQ and other message brokers into your system. **Spring Cloud Task** – Enables development and running of short-lived microservices. This is how you can implement the ‘serverless computing’ model with Spring Cloud. Think AWS Lambdas. **Spring Cloud Dataflow** – Toolkit for data integration and real-time data processing pipelines. It can work closely with Spring Cloud Stream or Spring Cloud Task. **Spring Cloud Zookeeper** – Apache Zookeeper integration for Spring Boot apps. It helps with problems like Service Discovery and Distributed Configuration. **Spring Cloud for AWS** – Makes integration with Amazon Web Services easier. The idea is to build the application around the hosted services without having to care about infrastructure or maintenance. It connects Messaging and Caching Spring APIs with AWS. **Spring Cloud Spinnaker** – Makes deploying Spinnaker easy. This is mainly for multi-cloud configuration. **Spring Cloud Contract** – An umbrella project that helps implement Consumer Driven Contracts in Spring Cloud. The idea of a microservice architecture blueprint I find the Spring Cloud proposition very tempting. What I think is still lacking in the microservices environment is a set of standard best practices, tools and frameworks that are guaranteed to work together and are maintained together. A sort of microservices blueprint for success. Well, I should not say that this is lacking, as I believe Spring Cloud is that blueprint for success in microservices. 
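The circuit breaker mentioned in the Netflix component list above is a pattern worth understanding on its own. Here is a minimal plain-Java sketch of the idea- a toy class of my own for illustration, not the Hystrix API: after a number of consecutive failures the breaker "opens" and returns a fallback immediately instead of hitting the struggling downstream service.

```java
import java.util.function.Supplier;

// Toy circuit breaker: after `failureThreshold` consecutive failures it
// opens and fails fast with a fallback instead of calling the service.
public class CircuitBreaker {
    private final int failureThreshold;
    private int consecutiveFailures = 0;

    public CircuitBreaker(int failureThreshold) {
        this.failureThreshold = failureThreshold;
    }

    public boolean isOpen() {
        return consecutiveFailures >= failureThreshold;
    }

    public <T> T call(Supplier<T> remoteCall, T fallback) {
        if (isOpen()) {
            return fallback; // fail fast, do not hit the remote service
        }
        try {
            T result = remoteCall.get();
            consecutiveFailures = 0; // a success resets the breaker
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;
            return fallback;
        }
    }

    public static void main(String[] args) {
        CircuitBreaker breaker = new CircuitBreaker(3);
        // Simulate a downstream service that always fails:
        for (int i = 0; i < 5; i++) {
            String answer = breaker.call(() -> {
                throw new RuntimeException("service down");
            }, "fallback");
            System.out.println(answer + " (open=" + breaker.isOpen() + ")");
        }
    }
}
```

A production breaker such as Hystrix also times out slow calls and moves to a "half-open" state after a cool-down period to probe whether the service has recovered- this sketch only shows the core open/closed mechanics.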
Of course, I am not being naive here- there are multiple prerequisites for the microservices pattern to succeed, but at least with Spring Cloud, you have the technical choices nailed down. To drive the point home, the big advantages of going with Spring Cloud are:\nProven solutions– large parts of the framework come from places like Netflix where they were pushed to their limits and battle-tested They are tested and work well together- Going the Spring Cloud route, you are going the standard route Good community behind these projects– you can get help and find plenty of resources explaining how to do things Spring Boot is already a massive success– these technologies are going to work well with Spring Boot Scope and quality of the components– you are well covered here with most microservices problems solved in beautiful ways Opinionated framework– there are conscious design decisions made by the framework that steer you in the right direction With all these benefits I believe that Spring Cloud will do for microservices architecture what Spring Boot did for the microservice itself. Standardize it, make it better and more pleasant to work with than ever… It will also become more and more popular.\nSo why isn’t everyone using it already? If Spring Cloud is so amazing, why isn’t everyone using it already? I think it boils down to a few reasons:\nMany places that claim ‘microservices’ usage are not really using microservices. They are running what I would call decomposed monoliths, making a lot of these capabilities unnecessary. Developers are often not empowered to make architectural decisions. Even though Spring Cloud is popular, people making decisions on what to use may still have biases towards more ‘enterprise’ offerings. Spring Cloud can be mixed and matched with other technologies. The fact that the whole project is not using Spring Cloud does not mean that there is no trace of it there. 
It is still relatively new (2015) and more people are adopting it every year. Summary Spring Cloud is a vast and impressive project. If you are interested in microservices development, you should be aware of what it has to offer. With a very active community and maintainers, it will only become more useful and popular. See what Spring Cloud has to offer next time you are faced with a microservices dilemma.\n","permalink":"https://e4developer.com/posts/spring-cloud-blueprint-for-successful-microservices/","summary":"\u003cp\u003eIf you are interested in building Microservices in the JVM ecosystem- you have to check out Spring Cloud. Spring Cloud is a project whose goal is to make Microservices architecture and patterns simple and practical to use. Spring Boot provides an opinionated way of making a microservice; Spring Cloud gives you an opinionated framework for getting your architecture around them.\u003c/p\u003e\n\u003cp\u003e\u003cimg loading=\"lazy\" src=\"/posts/spring-cloud-blueprint-for-successful-microservices/images/spring-cloud-logo.png\"\u003e\u003c/p\u003e\n\u003ch3 id=\"the-scope-of-spring-cloud\"\u003eThe Scope of Spring Cloud\u003c/h3\u003e\n\u003cp\u003eSpring Cloud is a large project that is difficult to talk about as a whole. It is easier to look at individual components and see what they bring to the table. One great thing about Spring Cloud is that you are free to mix and match the components. You may for example want to use Spring Cloud Config server and nothing else. You can even use that with non-Java based microservices. Here are the parts of Spring Cloud with brief descriptions of what they do:\u003c/p\u003e","title":"Spring Cloud - Blueprint for Successful Microservices"},{"content":"It seems that just recently the majority of server-side development was done with some flavor of Enterprise Java. Who can forget J2EE, or JEE, and writing all those JSP, JSF and Struts applications? 
It also feels that over the past few years there is less and less happening in that space. New projects are regularly picking alternative technologies, and the release of JEE 8 did not have the same impact as Spring Boot or Microservices in general. Microprofile is an attempt to change that. Microprofile is the Enterprise Java answer to Microservices.\nWhat exactly is Microprofile? I will start by quoting the Microprofile website (http://microprofile.io) directly:\nThe MicroProfile is a baseline platform definition that optimizes Enterprise Java for a microservices architecture and delivers application portability across multiple MicroProfile runtimes.\nThat roughly translates to: “Microprofile is a Java Enterprise subset suitable for building microservices.” Sounds great- plenty of Java developers have vast Java Enterprise experience that could be reused in a more microservices-oriented framework. The official website elaborates by adding that the initial baseline is JAX-RS + CDI + JSON-P. As a fan of CDI beans that sounds like a good start to me. This is also not meant to be the whole extent of Microprofile. Because it is not Oracle-controlled, it is not strictly limited to using technologies from the Java Enterprise portfolio. The idea here is to combine them with other open-source projects.\nThis leads to another important point about Microprofile. It is controlled by the Eclipse Foundation- that means community control and innovation. Microprofile is free to innovate and experiment much faster than Java Enterprise ever was with the JCP and JSR process. In my opinion, this is good, as we will have the best of both worlds. A very solid foundation in terms of proven Java EE technologies, coupled with heavy community involvement and freedom to move forward faster.\nCan you already use Microprofile? The answer is yes! Microprofile version 1.3 was released in January 2018. Eclipse has even published some samples on their GitHub profile. 
Here I would like to take a closer look at their microprofile-sample-cannonical which is a simple sample of a Rest microservice.\nSo the main application class goes as follows:\n/* * Copyright (C) 2016, 2017 Antonio Goncalves and others. * * Licensed under the Apache License, Version 2.0 (the \u0026#34;License\u0026#34;); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an \u0026#34;AS IS\u0026#34; BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or * implied. * * See the License for the specific language governing permissions and * limitations under the License. */ package org.eclipse.microprofile.sample.canonical.rest; import javax.enterprise.context.ApplicationScoped; import javax.ws.rs.ApplicationPath; import javax.ws.rs.core.Application; @ApplicationPath(\u0026#34;/\u0026#34;) @ApplicationScoped public class RestApplication extends Application { } and we have the Rest controller:\n/* * Copyright (C) 2016, 2017 Antonio Goncalves and others. * * Licensed under the Apache License, Version 2.0 (the \u0026#34;License\u0026#34;); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an \u0026#34;AS IS\u0026#34; BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or * implied. * * See the License for the specific language governing permissions and * limitations under the License. 
*/ package org.eclipse.microprofile.sample.canonical.rest; import javax.enterprise.context.RequestScoped; import javax.inject.Inject; import javax.json.Json; import javax.json.JsonArrayBuilder; import javax.ws.rs.GET; import javax.ws.rs.Path; import javax.ws.rs.Produces; import javax.ws.rs.core.MediaType; import org.eclipse.microprofile.sample.canonical.utils.QLogger; import java.util.ArrayList; import java.util.List; import java.util.Random; import java.util.logging.Logger; @Path(\u0026#34;/\u0026#34;) @RequestScoped public class TopCDsEndpoint { @Inject @QLogger private Logger logger; @GET @Produces(MediaType.APPLICATION_JSON) public String getTopCDs() { final JsonArrayBuilder array = Json.createArrayBuilder(); final List\u0026lt;Integer\u0026gt; randomCDs = getRandomNumbers(); for (final Integer randomCD : randomCDs) { array.add(Json.createObjectBuilder().add(\u0026#34;id\u0026#34;, randomCD)); } return array.build().toString(); } private List\u0026lt;Integer\u0026gt; getRandomNumbers() { final List\u0026lt;Integer\u0026gt; randomCDs = new ArrayList\u0026lt;\u0026gt;(); final Random r = new Random(); randomCDs.add(r.nextInt(100) + 1101); randomCDs.add(r.nextInt(100) + 1101); randomCDs.add(r.nextInt(100) + 1101); randomCDs.add(r.nextInt(100) + 1101); randomCDs.add(r.nextInt(100) + 1101); logger.info(\u0026#34;Top CDs are \u0026#34; + randomCDs); return randomCDs; } } So far this looks great. You can see that there is not much code needed here and the whole application resembles Spring Boot or Dropwizard. 
Now let’s look at the dependencies…\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;project xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; xsi:schemaLocation=\u0026#34;http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\u0026#34; xmlns=\u0026#34;http://maven.apache.org/POM/4.0.0\u0026#34;\u0026gt; \u0026lt;modelVersion\u0026gt;4.0.0\u0026lt;/modelVersion\u0026gt; \u0026lt;parent\u0026gt; \u0026lt;groupId\u0026gt;org.eclipse.microprofile.sample\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;parent\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.0.0-SNAPSHOT\u0026lt;/version\u0026gt; \u0026lt;/parent\u0026gt; \u0026lt;artifactId\u0026gt;canonical\u0026lt;/artifactId\u0026gt; \u0026lt;packaging\u0026gt;${packaging.type}\u0026lt;/packaging\u0026gt; \u0026lt;name\u0026gt;Microprofile Samples :: Canonical\u0026lt;/name\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.jboss.logging\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;jboss-logging\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;javax\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;javaee-api\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;net.javacrumbs.json-unit\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;json-unit-fluent\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;com.fasterxml.jackson.core\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;jackson-databind\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.jboss.resteasy\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;resteasy-client\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; 
\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.arquillian.universe\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;arquillian-junit\u0026lt;/artifactId\u0026gt; \u0026lt;type\u0026gt;pom\u0026lt;/type\u0026gt; \u0026lt;scope\u0026gt;test\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;junit\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;junit\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; \u0026lt;build\u0026gt; \u0026lt;finalName\u0026gt;microprofile-sample-canonical\u0026lt;/finalName\u0026gt; \u0026lt;/build\u0026gt; \u0026lt;profiles\u0026gt; \u0026lt;profile\u0026gt; \u0026lt;id\u0026gt;wildfly-swarm\u0026lt;/id\u0026gt; \u0026lt;build\u0026gt; \u0026lt;plugins\u0026gt; \u0026lt;plugin\u0026gt; \u0026lt;groupId\u0026gt;org.wildfly.swarm\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;wildfly-swarm-plugin\u0026lt;/artifactId\u0026gt; \u0026lt;/plugin\u0026gt; \u0026lt;/plugins\u0026gt; \u0026lt;/build\u0026gt; \u0026lt;/profile\u0026gt; \u0026lt;profile\u0026gt; \u0026lt;id\u0026gt;hammock\u0026lt;/id\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;ws.ament.hammock\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;dist-microprofile\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;ws.ament.hammock\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;test-arquillian\u0026lt;/artifactId\u0026gt; \u0026lt;scope\u0026gt;test\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.jboss.arquillian.container\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;arquillian-weld-embedded\u0026lt;/artifactId\u0026gt; \u0026lt;scope\u0026gt;test\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; \u0026lt;/profile\u0026gt; 
\u0026lt;/profiles\u0026gt; \u0026lt;/project\u0026gt; Having been exposed to Spring Boot, I find the number of dependencies here quite large in comparison. This is not necessarily bad, but opinionated frameworks with their ease of development are gaining more and more traction. Perhaps Microprofile in the future could provide a bit more guidance on which libraries are recommended or necessary to get some basics running.\nOne interesting thing is the use of WildFly Swarm. With this, we gain something similar to the embedded Tomcat found in Spring Boot. It is added here as a Maven plugin- closer and more seamless integration would be most welcome!\nWhat is the future of Microprofile? At the time of writing, Microprofile has just released version 1.3. Currently, that includes:\nJAX-RS 2.0 CDI 1.2 JSON-P 1.0 Config 1.1 (slight update to Config 1.0) Fault Tolerance 1.0 JWT Propagation 1.0 Health Metrics 1.0 Health Check 1.0 MicroProfile OpenTracing 1.0 MicroProfile OpenAPI 1.0 MicroProfile Type-safe Rest Client 1.0 MicroProfile Metrics 1.1 MicroProfile Config 1.2 Given that version 1.0 was released on 17th September 2016 (announcement here) this is a very impressive list of features. The features that the team will focus on now are:\nJSON-B 1.0 CDI 2.0 JSON-P 1.1 JAX-RS 2.1 Improving development documentation There is a lot of exciting stuff and innovation here. Especially with JSON-B (that you should read about, as it may change the way we convert Java to JSON: http://json-b.net/) and with CDI 2.0, as they bring plenty of new features to the already very good CDI beans.\nMy final thoughts It feels that Microprofile will bring a much-needed breath of fresh air into Java Enterprise development. At the moment the team is moving at a very high speed and there are ample opportunities to cooperate with them and influence the direction- just check https://microprofile.io. 
One thing I really would like to see is a closer integration with the application server, as this was one of the features that made Spring Boot and Dropwizard so popular. WildFly Swarm is great, but I am not sure how easily you can configure it with Microprofile itself. Maybe there is a chance for better integration than a Maven plugin?\nThe future of Enterprise Java is looking exciting once again!\n","permalink":"https://e4developer.com/posts/java-enterprise-and-microservices-meet-microprofile/","summary":"\u003cp\u003eIt seems that just recently the majority of server-side development was done with some flavor of Enterprise Java. Who can forget J2EE, or JEE, and writing all those JSP, JSF and Struts applications? It also feels that over the past few years there is less and less happening in that space. New projects are regularly picking alternative technologies, and the release of JEE 8 did not have the same impact as Spring Boot or Microservices in general. Microprofile is an attempt to change that. Microprofile is the Enterprise Java answer to Microservices.\u003cimg loading=\"lazy\" src=\"/posts/java-enterprise-and-microservices-meet-microprofile/images/microprofile.png?resize=714%2C252\u0026ssl=1\"\u003e\u003c/p\u003e","title":"Java Enterprise and Microservices - meet Microprofile!"},{"content":"If you want to get into microservices development, you will want to run multiple things on your machine. Having services, databases, message brokers etc. all working on your machine without conflicts may be very difficult. Docker solves this problem beautifully.\nDocker and Containers So, what is Docker and why is it such a big deal? Docker enables you to run different software on your own machine… But wait- can’t you already do that? Yes, you can, but not quite in the same way as you do with Docker.\nYou might have come across Virtual Machines, the idea of having another operating system executed on your machine that is completely separate from your own. 
Containers are very similar, and in most cases better! You have a similar level of separation (we know how hard it is to delete stuff or deal with ports clashing, etc.), but the operating system layer is not replicated as a whole for every container (as it is for a virtual machine). Have a look at the diagram that hopefully makes this clear:\nOnce you get Docker installed, you will be able to run different software on your machine with a very low overhead. So, why wait? Get Docker now!\nGetting Docker So, how do you get Docker? Are there any major prerequisites? These days Docker will run fine on Windows, Mac and Linux. Installation notes may be a bit different, but you can find almost all you need at: https://www.docker.com/community-edition. The Community Edition of Docker will serve you just fine for your development needs. There are Enterprise versions available, but these are much more expensive and not necessary for your local development.\nHow can Docker help with your development This is the fun part! I am assuming that you have Docker installed (it does not matter on which operating system, the following will work anyway!). Imagine you want to run MongoDB on your machine. You no longer have to install it yourself. You can get it from https://hub.docker.com/_/mongo/ – this is the official image repository for MongoDB. You can follow the instructions provided there, which boil down to running:\n$ docker run --name some-mongo -d mongo\nThis will download the Docker image of MongoDB to your machine and automatically expose port 27017 for you to connect. To do just that you can see from the documentation that what you need is:\n$ docker run -it --link some-mongo:mongo --rm mongo sh -c \u0026quot;exec mongo $MONGO_PORT_27017_TCP_ADDR:$MONGO_PORT_27017_TCP_PORT/test\u0026quot;\nDon’t worry if this looks too arcane! Once you start using Docker a bit it will make much more sense.\nIf you want to connect an application and use this as your MongoDB- no problem! 
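Before wiring an application up to the containerized MongoDB, it can be handy to sanity-check that something is actually listening on port 27017. A few lines of plain Java are enough- this `PortProbe` is a generic, hypothetical helper of my own, not part of Docker or the Mongo image:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Returns true if something is accepting TCP connections on host:port.
public class PortProbe {
    public static boolean isListening(String host, int port, int timeoutMillis) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMillis);
            return true;
        } catch (IOException e) {
            return false; // refused, unreachable or timed out
        }
    }

    public static void main(String[] args) {
        // 27017 is the port the MongoDB container exposes by default.
        System.out.println("MongoDB reachable: " + isListening("localhost", 27017, 500));
    }
}
```

The same trick works for any of the containers listed below- Kafka, PostgreSQL, Jenkins and so on- since from the host's point of view they are all just TCP ports.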
However, I would recommend reading the documentation first on any configuration you may need. So running Mongo is nice, but what else can it do?\nKafka – https://hub.docker.com/r/wurstmeister/kafka/ Distribution of Ubuntu – https://hub.docker.com/_/ubuntu/ Jenkins – https://hub.docker.com/_/jenkins/ PostgreSQL – https://hub.docker.com/_/postgres/ Apache Flink – https://hub.docker.com/_/flink/ Pretty much any technology you ever wanted to try Run your microservices Whatever you want, as you can create your own docker images! Docker and Microservices – the big picture You have seen that Docker is incredibly useful as a development tool, but that’s not all! Docker is also great for deploying your application. One of the most Docker-friendly clouds I have come across is Digital Ocean. This space changes rapidly so if you are interested in deploying your Docker containers, do some googling around and see which company has the best offering. You can use AWS and Azure without any issue as well.\nIn reality, for a production system you probably don’t want naked Docker containers. For a real microservices deployment you may need replication and easy scaling of your containers. At the time of writing I am aware of two mainstream solutions to this problem:\nKubernetes – absolutely amazing system for container orchestration, born out of Google’s Borg project Docker Swarm – Docker’s native answer to the orchestration problem, a bit less mature than Kubernetes There is much more to Docker and containers. Docker is not an open-source project, but there is a large effort in the open source community based on Docker. Project Moby and Containerd are your go-to open-source projects if you are interested in this space. These may become the go-to container solutions in the future…\nThis is a very active space at the moment, so I recommend learning it to some depth and keeping an eye open for any changes. 
If you want to be a microservices developer (or maybe any server-side developer soon) you will have to get familiar with these concepts and technologies.\nSummary Docker is a great tool to have on your development workstation. It enables you to easily try and test technologies and solutions that may have been difficult to handle in the past. Beyond that, Docker and Containers are core things to understand and use when dealing with microservices. Tools such as Kubernetes and Docker Swarm are becoming commonplace. If you want to be involved in modern development, you need to familiarize yourself with these concepts and technologies. The best way to learn is to try, so enjoy playing with Docker and containers!\n","permalink":"https://e4developer.com/posts/microservices-toolbox-docker/","summary":"\u003cp\u003eIf you want to get into microservices development, you will want to run multiple things on your machine. Having services, databases, message brokers etc. all working on your machine without conflicts may be very difficult. Docker solves this problem beautifully.\u003c/p\u003e\n\u003cp\u003e\u003cimg loading=\"lazy\" src=\"/posts/microservices-toolbox-docker/images/docker.png\"\u003e\u003c/p\u003e\n\u003ch3 id=\"docker-and-containers\"\u003eDocker and Containers\u003c/h3\u003e\n\u003cp\u003eSo, what is Docker and why is it such a big deal? Docker enables you to run different software on your own machine… But wait- can’t you already do that? Yes, you can, but not quite in the same way as you do with Docker.\u003c/p\u003e","title":"Microservices Toolbox - Docker"},{"content":"This is the beginning of a series of blog posts where I will introduce and explain different tools and frameworks that are useful in microservices development. It is difficult to start such a series without introducing Spring Boot!\nMeet Spring Boot- a framework that released its 1.0 version in 2014 and is by now nearly synonymous with microservices in the Java world. 
Just look at these Google Trends statistics! Dropwizard (one of the initial competitors) and even the general microservices term are far less popular:\nWith its undeniable popularity, this is the one framework that you absolutely have to be aware of when talking about microservices in the JVM ecosystem.\nSpring Boot – The Basics Spring Boot is effectively a very lightweight version of Spring that you can run as an executable .jar, as it bundles its own Tomcat runtime. It also heavily favors convention over configuration. In the words of the library maintainers:\n(Spring Boot) Takes an opinionated view of building production-ready Spring applications. Spring Boot favors convention over configuration and is designed to get you up and running as quickly as possible.\nWhat does that mean for the users? It means that writing a very basic Spring Boot application- the Hello World- is incredibly simple. Your whole application can consist of just a few files. All you need is a pom.xml file:\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;project xmlns=\u0026#34;http://maven.apache.org/POM/4.0.0\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; xsi:schemaLocation=\u0026#34;http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\u0026#34;\u0026gt; \u0026lt;modelVersion\u0026gt;4.0.0\u0026lt;/modelVersion\u0026gt; \u0026lt;groupId\u0026gt;com.e4developer\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-hello-world\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;0.0.1-SNAPSHOT\u0026lt;/version\u0026gt; \u0026lt;packaging\u0026gt;jar\u0026lt;/packaging\u0026gt; \u0026lt;name\u0026gt;spring-boot-hello-world\u0026lt;/name\u0026gt; \u0026lt;description\u0026gt;Demo project for Spring Boot\u0026lt;/description\u0026gt; \u0026lt;parent\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; 
\u0026lt;artifactId\u0026gt;spring-boot-starter-parent\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.5.9.RELEASE\u0026lt;/version\u0026gt; \u0026lt;relativePath/\u0026gt; \u0026lt;/parent\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-web\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-test\u0026lt;/artifactId\u0026gt; \u0026lt;scope\u0026gt;test\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; \u0026lt;/project\u0026gt; And a very simple Java class:\npackage com.e4developer; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; import org.springframework.stereotype.Controller; import org.springframework.web.bind.annotation.RequestMapping; import org.springframework.web.bind.annotation.ResponseBody; @Controller @SpringBootApplication public class SpringBootHelloWorldApplication { @RequestMapping(\u0026#34;/\u0026#34;) @ResponseBody String hello() { return \u0026#34;Hello World from e4developer!\u0026#34;; } public static void main(String[] args) { SpringApplication.run(SpringBootHelloWorldApplication.class, args); } } If you want, you can check out the code from my GitHub repository and run it yourself; you can find it here: https://github.com/bjedrzejewski/spring-boot-hello-world You can run it by building the project in your favorite IDE and either running it from the IDE or with the command: mvn spring-boot:run\nSpring Boot – Rapid High Level Overview So what features do we have in Spring Boot and why is it so popular? 
Let’s go through the items one by one:\nEmbedded Tomcat – you don’t have to worry about an application server; Spring Boot by default includes Tomcat in its jar. You can use a different embedded server, such as Jetty. Autoconfiguration – this is a very opinionated framework; if you are happy with the default setup, Spring Boot mostly configures itself. It dramatically cuts down on the boilerplate. Basically, to include a Spring Boot Starter dependency - for example for MongoDB - you just add spring-boot-starter-data-mongodb to your dependencies. I wrote another blog post on that topic if you want to learn more: http://blog.scottlogic.com/2016/11/22/spring-boot-and-mongodb.html . This is one of the core concepts and ideas behind Spring Boot. Initializr – getting started is super simple! Just visit https://start.spring.io/ and pick what you need. IntelliJ includes Initializr, which makes it even more convenient. This is how I created the hello world project mentioned before. Actuator – Health checks and other monitoring utilities for building production-quality microservices. Property files with yml support – The default Spring Boot configuration is an empty application.properties file, which is a nice contrast to most of the frameworks out there. Spring Boot CLI – Optional command line interface used for quick prototyping with Groovy. Spring – Spring Boot is fully Spring compatible. In fact, it is Spring, just with extra features and configuration added for convenience. Customization – The fact that it is all opinionated and automatic does not stop you from overriding how it all works or is configured. Summary Spring Boot is an extremely popular framework. If you want to work with microservices in the JVM ecosystem, you should be aware of it. It is a key component of Spring Cloud, which is one of the most mature ways of getting your microservices set up with Java. 
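To give a flavour of the Actuator feature mentioned in the list above, here is a minimal sketch of a custom health check. It assumes the spring-boot-starter-actuator dependency has been added to the pom.xml shown earlier; the DiskSpaceCheck class name and the 100 MB threshold are my own illustration, not part of the original project:

```java
package com.e4developer;

import java.io.File;

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

// Any @Component implementing HealthIndicator is picked up automatically
// and contributes to the Actuator /health endpoint.
@Component
public class DiskSpaceCheck implements HealthIndicator {

    // Illustrative threshold: report DOWN below 100 MB of free disk space.
    private static final long THRESHOLD_BYTES = 100L * 1024 * 1024;

    @Override
    public Health health() {
        long freeBytes = new File(".").getFreeSpace();
        if (freeBytes < THRESHOLD_BYTES) {
            return Health.down().withDetail("freeBytes", freeBytes).build();
        }
        return Health.up().withDetail("freeBytes", freeBytes).build();
    }
}
```

With this in place, hitting /health on the running application includes the custom status and detail alongside Spring Boot's built-in checks - no extra wiring needed, which is the convention-over-configuration idea in action.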
There is much more to learn about both Spring Cloud and Spring Boot, but before you dive in, I really recommend setting up a Hello World project and playing with it a little. There is no substitute for hands-on experience!\n","permalink":"https://e4developer.com/posts/microservices-toolbox-spring-boot/","summary":"\u003cp\u003eThis is the beginning of a series of blog posts where I will introduce and explain different tools and frameworks that are useful in microservices development. It is difficult to start such a series without introducing Spring Boot!\u003c/p\u003e\n\u003cp\u003e\u003cimg loading=\"lazy\" src=\"/posts/microservices-toolbox-spring-boot/images/spring-boot.png\"\u003e\u003c/p\u003e\n\u003cp\u003eMeet Spring Boot - a framework that released its 1.0 version in 2014 and is by now nearly synonymous with microservices in the Java world. Just look at these Google Trends statistics! Dropwizard (one of the initial competitors) and even the general microservices term are far less popular:\u003c/p\u003e","title":"Microservices Toolbox: Spring Boot"},{"content":"For those teams already using pull requests (or merge requests, as they are sometimes known), this advice seems trivial. For the teams that do not use them, it may not be so simple. Let me explain why you need them and how to get started.\nWhat is a Pull Request A pull request is effectively a request to get your code merged with the rest of the source code. If you are using git via Github, Bitbucket or Gitlab, you have automatic support for them. If you are using something else, maybe even a version control system other than git, you may need to do some research for an equivalent process.\nWhat do you get from Pull Requests The main benefit of a Pull Request is an organized, dedicated place for doing code reviews. Hopefully, you are already convinced that code reviews are important. 
I have seen people claiming that they are doing code reviews manually, or via some other process, but I have never seen anything as effective and simple as the Pull Request method. If you disagree and have something better, let me know in the comments! I would love to know!\nThe other great benefit is an improved history of the project. We all know that commits can have less than ideal messages, and sometimes the big picture is hard to see from a single commit. What you get with a Pull Request is all the related commits put together with an overarching review and, hopefully, a decent explanation of the ultimate goal that they serve.\nA Pull Request can also serve as a quality gate before integrating a developer’s code with the main code of the project. I’m talking here about the build and automated testing being started on the creation of a Pull Request. The benefits quickly manifest themselves in a much more stable codebase.\nFears before introducing Pull Requests I have seen pull request adoption a few times, and there are often concerns raised. I will try to calm the most common fears here:\n**This is a new process that will waste time:** I have never seen a project take more time and move slower after pull requests were introduced. Maybe at first it sounds tricky, but I have found that virtually all developers quickly adopt it as second nature. It is really quick and simple. **People will just end up blocking each other with conflicting changes:** This may happen, but it is either a problem that already exists in the team or a misuse of the Pull Request process. If you have multiple people editing the exact same parts of the system, you are bound to have conflicts; Pull Requests just make them visible. You need to divide the work better or explore ideas such as pair programming. If this really was not the case, you may have developers creating gigantic 1000+ line Pull Requests. 
This is not ideal - Pull Requests should be small enough to be realistically reviewed and understood (of course, there can always be a rare exception). **We already do code review, so what is the point anyway?:** This is answered in the first part of the article! Summary and next steps I hope that you have been convinced of the amazing utility that Pull Requests bring! I can’t recommend using them enough. If you want to get started, you may want to:\nSee how to deal with pull requests in your environment of choice: Github: https://help.github.com/articles/creating-a-pull-request/ Bitbucket: https://www.atlassian.com/git/tutorials/making-a-pull-request Gitlab: https://docs.gitlab.com/ee/gitlab-basics/add-merge-request.html Anything else – should be easy to Google! Adopt some workflow with your pull requests. I recommend: Gitflow, neatly explained here: https://danielkummer.github.io/git-flow-cheatsheet/ GitHub flow, as explained here: https://guides.github.com/introduction/flow/ Have a meeting with your team about giving it a go and try using them for a month or so. I don’t think you will want to go back! Good luck with your pull requests!\n","permalink":"https://e4developer.com/posts/helping-your-team-start-using-pull-request/","summary":"\u003cp\u003eFor those teams already using pull requests (or merge requests, as they are sometimes known), this advice seems trivial. For the teams that do not use them, it may not be so simple. Let me explain why you need them and how to get started.\u003c/p\u003e\n\u003cp\u003e\u003cimg loading=\"lazy\" src=\"/posts/helping-your-team-start-using-pull-request/images/pulling-line.jpg\"\u003e\u003c/p\u003e\n\u003ch3 id=\"what-is-a-pull-request\"\u003eWhat is a Pull Request\u003c/h3\u003e\n\u003cp\u003eA pull request is effectively a request to get your code merged with the rest of the source code. If you are using git via Github, Bitbucket or Gitlab, you have automatic support for them. 
If you are using something else, maybe even a version control system other than git, you may need to do some research for an equivalent process.\u003c/p\u003e","title":"Helping your team - Start using pull request"},{"content":"I have started this blog to share my technical insight and passion with the wider development community. I would not be true to myself if I did not start by explaining why I enjoy working with microservices.\nYou are free to use modern technologies One of the big problems related to working with large, monolithic applications is that you are locked in with your technology. This problem is actually twice as bad as it may seem at first. The first part is that older, more proven frameworks are more likely to be chosen. That means you are unlikely to stay on the forefront of innovation, something that many developers value. The second part of the issue is that once something is chosen, it usually takes a large effort (often too large) to make any significant change.\nWorking with microservices, I have had the pleasure of working with both Spring and Grails based services, making use of the latest versions of the respective frameworks. Seeing this modern technology in the enterprise is a breath of fresh air!\nEncapsulation and segregation of responsibility is natural One of the great promises of SOA (Service Oriented Architecture) was self-contained services - black boxes, in a way. This is why I don’t see microservices as something drastically different from SOA, but rather as its modern incarnation. When we tried implementing SOA with the older style, large Java EE applications run on JBoss or Websphere, it was too easy to make a mistake. You want to separate your services, communicating via some message bus, and then suddenly one developer decides: *“What is a call between friends?”* I am joking here, but the abstraction was way too easy to break by someone making a mistake or trying to cut corners.\nWith microservices, these boundaries are stricter. 
When things are run as separate microservices, it is not so easy to make such a mistake - things are separated naturally. This results in a cleaner abstraction and a codebase that is nicer to work with.\nThe architecture is easier to see and enforce When dealing with microservices, the architecture is often quite obvious. Services run in separate containers and are often named quite well. There is no need to look into the documentation or read a copious amount of source code. This enables architects, and whoever is responsible for this level of design on the project, to have their vision implemented clearly. The fact that each part of the system is smaller and well separated makes it even possible to quickly review the code and make sure that nothing questionable is happening.\nExplicitly naming the microservices also gives developers and everyone else a common, unambiguous way of talking about different components of the system.\nChanging code is much easier I found it much easier to change the code in a project where microservices were implemented than in an average monolith. Once again, the focus and brevity of each service are your biggest allies. It is also helpful that rebuilding, re-running tests and starting a microservice is often much faster than doing similar work with a monolithic application.\nI did not mention here the fact that modern microservices frameworks such as Spring Boot or Grails have a laser focus on cutting down the unnecessary configuration and boilerplate code that so often gets in the way of understanding. I believe that, thanks to these efforts, Spring Boot has made microservices accessible to a wider group of developers.\nThorough testing is possible and expected I have seen very few monoliths with automated test coverage that made everyone confident in the system stability after a release. Nearly always, a large manual testing/QA team is required. 
For an average enterprise, it is very difficult to thoroughly automate such a large and complex system.\nMicroservices, when they define good contracts in their APIs, are not so intimidating. With a good level of unit testing, thoroughly tested endpoints and some integration testing where necessary, it is much easier to be confident that the system will behave as expected.\nAnother benefit of having this sort of separation is that a code change made in one service can’t affect what is happening in the remaining services. This drastically reduces the chance of unexpected errors. Of course, in SOA this should be the case as well, but as I mentioned - SOA by the book was always a rare sight.\nSummary There are more benefits to adopting microservices, and each developer will find their own reasons why certain architectures suit them. There are also many potential pitfalls that do not exist in other architectures - these will be discussed in another post. Enjoy your microservices!\n","permalink":"https://e4developer.com/posts/microservices-five-benefits-from-the-developer-perspective/","summary":"\u003cp\u003eI have started this blog to share my technical insight and passion with the wider development community. I would not be true to myself if I did not start by explaining why I enjoy working with microservices.\u003c/p\u003e\n\u003ch3 id=\"you-are-free-to-use-modern-technologies\"\u003eYou are free to use modern technologies\u003c/h3\u003e\n\u003cp\u003eOne of the big problems related to working with large, monolithic applications is that you are locked in with your technology. This problem is actually twice as bad as it may seem at first. The first part is that older, more proven frameworks are more likely to be chosen. That means you are unlikely to stay on the forefront of innovation, something that many developers value. 
The second part of the issue is that once something is chosen, it usually takes a large effort (often too large) to make any significant change.\u003c/p\u003e","title":"Microservices - Five benefits from the developer perspective"},{"content":"I love working as a part of a great software development team. Thanks to my job, I also have a chance to lead such teams. In this series of blog posts titled Helping your team, I would like to explore different ideas and techniques to make sure that the team you are part of is performing at its best!\nBefore going into specific topics, it is important to realize that you do not have to be a team lead or a senior member of the team to incite change! Introducing good ideas and championing best practices can be done by anyone. As a lead, this is your responsibility.\nDrawing Together This is a very simple, but powerful technique. A number of times I have debated a difficult design decision or tried to understand legacy code where the people involved just could not see the same picture. In such cases, there is nothing better than drawing an actual picture (with some pseudo UML) that gets everyone to understand what is being discussed. To help you see where drawing can be a useful technique, let me present you with a few situations:\nDiscussing/discovering your domain A good understanding of your domain is crucial to making good design decisions. If the domain model is anything but trivial, it is near impossible to envision it together without an actual picture. How do you draw it? Preferably not alone - you can work with someone who is an expert in the domain (business analyst/subject matter expert/developer who knows the domain) and do it either on paper, or on a whiteboard if there are more people involved. 
If you want to use some software to document this work, do it after the drawing is done, so as not to unnecessarily slow down and confuse others.\nDesigning features, modeling your domain Before you commit to coding, you should have a good idea of what your solution should look like. There is nothing better here than a conceptual model that can quickly show you any logical inconsistencies. If you do it alone in your head, you may overlook important elements and make mistakes. The key here is having a medium that you can easily modify on the fly, so once again actual drawing is much better than a software application. I can’t stress enough how valuable this is. I have had occasions where I would discuss designs with my friend Cesar (find him on twitter) and, even after both of us came into the discussion with pretty good ideas, we would leave with something much better that we did not suspect was possible!\nUnderstanding legacy code This is always a challenge, especially common if you are a software consultant and are constantly seeing new (old) systems. Faced with a huge code base and scarce documentation, pen and paper are your best friends! Draw the main modules and interactions, and as you start to understand better, zoom in more and add details. Often, once you have built that map, a lot of decisions start to make more sense (hopefully…). This can again be documented later, if necessary, to make the process easier for others.\nGeneral advice on what to draw Do not stress too much about drawing very precise UML. What you want is to convey a message, so as long as you are understood, you are doing it right. I find a few kinds of diagrams especially useful:\nClass diagrams – basically arrows and boxes, showing the general connections between different modules/classes/domain objects. These give a general overview of what the thing that you are trying to represent looks like. Sequence diagrams – showing a specific system in motion: what calls what and what information is returned. 
They are usually more precise than class diagrams and used more often when discussing specific use cases. Component diagrams – these are often higher level than class diagrams. They can include things such as hardware, physical objects, other systems etc. You can represent different system boundaries here as well. I am not very precise in my classification here (not UML-precise at least), but hopefully you get the idea. With these three basic types, you can get most of the information across. If something else works for your team - let me know in the comments!\nSummary Drawing together is a powerful activity. It works on many levels. Not only can it help you when faced with a difficult problem, but it can help the whole team as well. Do not underestimate the value of the interactions that occur when people gather around a whiteboard (or whatever you have available). Share your experience with drawing with your team in the comments!\n","permalink":"https://e4developer.com/posts/helping-your-team-draw-together/","summary":"\u003cp\u003eI love working as a part of a great software development team. Thanks to my job, I also have a chance to lead such teams. In this series of blog posts titled \u003cem\u003eHelping your team\u003c/em\u003e, I would like to explore different ideas and techniques to make sure that the team you are part of is performing at its best!\u003c/p\u003e\n\u003cp\u003eBefore going into specific topics, it is important to realize that you do not have to be a team lead or a senior member of the team to incite change! Introducing good ideas and championing best practices can be done by anyone. As a lead, this is your responsibility.\u003c/p\u003e","title":"Helping your team - Draw together!"},{"content":"It is January 2018, time to decide what to do next. 2017 was an extremely eventful year for me. I became a British Citizen, bought a house and my daughter Eva was born. I don’t expect anything near that level of change in 2018, but who knows? 
There is one thing that I have been meaning to do for a while, and that is to start my own blog. I am already writing for Scott Logic: http://blog.scottlogic.com/bjedrzejewski/ but this one is different. It is my place to share what I like, not necessarily strictly work-related.\nOn that note - what am I planning to share here? The things that I love talking and learning about. Technology, working with people, maybe a little bit of chess (that’s where the e4 comes from!) for a change. I will share my honest opinions and experience, hoping that they may benefit others who are facing similar situations, or inspire them to do something. I am not a professional writer, so this is going to be a bit of a learning curve for me - please bear with me as I figure this whole blogging thing out and make many mistakes along the way!\nTill next time!\n","permalink":"https://e4developer.com/posts/starting-a-blog-why/","summary":"\u003cp\u003eIt is January 2018, time to decide what to do next. 2017 was an extremely eventful year for me. I became a British Citizen, bought a house and my daughter Eva was born. I don’t expect anything near that level of change in 2018, but who knows? There is one thing that I have been meaning to do for a while, and that is to start my own blog. I am already writing for Scott Logic: \u003ca href=\"http://blog.scottlogic.com/bjedrzejewski/\"\u003ehttp://blog.scottlogic.com/bjedrzejewski/\u003c/a\u003e but this one is different. It is my place to share what I like, not necessarily strictly work-related.\u003c/p\u003e","title":"Starting a blog - why?"},{"content":" Hi, I\u0026rsquo;m Bartosz Jedrzejewski I\u0026rsquo;m an Engineering Manager at Meta, where I work on AI/LLM infrastructure. Before that, I spent years building and scaling microservices, distributed systems, and engineering teams across fintech and enterprise software.\ne4developer is where I share my open and honest views on software development, technology, and working with people. 
The name - e4 - comes from a chess move. It\u0026rsquo;s how I start most of my games.\nWhat I write about This blog started in 2018 as a place to explore microservices, Java, and Spring Boot. Over the years it grew to cover software architecture, DevOps, AWS, and tech leadership. Now, as AI reshapes how we build software, I\u0026rsquo;m expanding into AI/LLM systems, distributed infrastructure, and engineering management - the things I work on every day.\nThe story so far I\u0026rsquo;ve written over 150 posts here. Some of the most popular ones - like HATEOAS - a simple explanation, Spring Boot Best Practices, and Please, stop writing so many for loops in Java! - continue to help developers years after they were published.\nConnect Twitter: @e4developer GitHub: bjedrzejewski LinkedIn: bartoszjedrzejewski If you\u0026rsquo;d like to get in touch, the best way is to reach out on Twitter or LinkedIn.\n","permalink":"https://e4developer.com/about/","summary":"\u003cdiv class=\"about-hero\"\u003e\n  \u003cimg src=\"/images/headshot.png\" alt=\"Bartosz Jedrzejewski\" class=\"about-avatar\"\u003e\n\u003c/div\u003e\n\u003ch2 id=\"hi-im-bartosz-jedrzejewski\"\u003eHi, I\u0026rsquo;m Bartosz Jedrzejewski\u003c/h2\u003e\n\u003cp\u003eI\u0026rsquo;m an Engineering Manager at \u003cstrong\u003eMeta\u003c/strong\u003e, where I work on AI/LLM infrastructure. Before that, I spent years building and scaling microservices, distributed systems, and engineering teams across fintech and enterprise software.\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003ee4developer\u003c/strong\u003e is where I share my open and honest views on software development, technology, and working with people. The name - \u003cem\u003ee4\u003c/em\u003e - comes from a chess move. It\u0026rsquo;s how I start most of my games.\u003c/p\u003e","title":"About"}]