Posts

Showing posts from 2018

Micro Service Musings: Moving Architecture Forward

As I've begun to see projects transition from single-solution monoliths to smaller components such as micro services and service-oriented architecture, one of the things I've noticed is that I worry less and less about what the code looks like, and the bigger issue becomes how the service integrates with the rest of the environment. Code quality will always be a pillar of software development, but much of the complexity of writing code has moved away from within the service and now lives in how our subsystems relate and interact with each other. Systems look a lot more streamlined and cleaner because they only contain code relevant to a particular portion of the project. As services become distributed across the environment, we start to incorporate more advanced platform architecture such as event sourcing, sagas, and orchestrators that allow each of our systems to talk to each other and perform

Multi-Tenant Design: The Basics

As your software system begins to grow and become successful, you may find yourself at a place where you want to monetize your software by allowing other companies to use it. One design that will come in handy is the multi-tenant design, which allows you to license your software to other companies, letting them use your system under some contract. You manage the system, and they are merely an entity within it, a tenant. At its core, a multi-tenant design is driven by the idea that each tenant has an identifier, which is passed through with each request or action. By including this information, the system knows what behavior to apply or what data set to pull from. Traditionally the tenant identifier is a GUID that is transferred through HTTP requests as an HTTP header, x-tenant-id . The value is then kept within the thread context and used to determine the configuration of a request, as well as the data it can access. Data should ei
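As a rough sketch of the header-driven approach described in this post, here is a minimal, hypothetical ASP.NET Core middleware that captures the x-tenant-id header; the class name and the context key are my own assumptions, not taken from the original.

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

// Hypothetical middleware sketch; names are illustrative, not from the post.
public class TenantMiddleware
{
    private readonly RequestDelegate _next;

    public TenantMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context)
    {
        // Pull the tenant identifier from the incoming request header.
        if (context.Request.Headers.TryGetValue("x-tenant-id", out var tenantId))
        {
            // Stash it for the rest of the pipeline (configuration lookups, data access, etc.).
            context.Items["TenantId"] = tenantId.ToString();
        }

        await _next(context);
    }
}
```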

C4 Model: Describing Software Architecture

The point behind the presentation Visualizing Software Architecture , by Simon Brown, was that we need to standardize the way software developers communicate with each other when discussing software architecture. As part of Simon's consulting, one exercise he gave to software architects was a simple task: whiteboard the architecture of a software system. The results he received varied immensely, and no de facto standard format ever emerged from this exercise. When inspecting the feedback from the participants, the general answer was that it was easy to come up with the design, but difficult to properly put their ideas onto paper, or whiteboard in this case. (Whiteboard example, c4model.com) Simon sought to start a movement by defining a set of organizational rules to help us describe our software architecture with a standard terminology. His work has led to the C4 Model , which stands for Context, Containers, Components,

Template WebApi: Request Validation

A part of the  Template WebAPI  series. Request validation can be a powerful tool for enforcing correct usage of your service. By setting up requirements for usage beforehand, you can prevent errors from occurring within your service that may create side effects. For our implementation, we'll be using FluentValidation , a library for strongly-typed validation rules. In this post, we'll go over: installing FluentValidation, creating a validator, creating a filter, and integrating with WebAPI. 1. Installing FluentValidation To install FluentValidation for .NET Core, install the package: FluentValidation.AspNetCore 2. Creating a Validator A validator is a class that you create in order to define the rules for how you should validate your object. We define a CreateCustomerRequest which contains some basic properties. Then we define our validation rules by creating an inner class, Validator. Going with this inner class instead of a separate file is to group like code together, redu
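A minimal sketch of the validator-with-inner-class pattern described above, assuming hypothetical request properties (the template's actual CreateCustomerRequest may differ):

```csharp
using FluentValidation;

public class CreateCustomerRequest
{
    public string Name { get; set; }
    public string Email { get; set; }

    // Inner validator class keeps the rules next to the request they validate.
    public class Validator : AbstractValidator<CreateCustomerRequest>
    {
        public Validator()
        {
            RuleFor(x => x.Name).NotEmpty();
            RuleFor(x => x.Email).NotEmpty().EmailAddress();
        }
    }
}
```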

Template WebAPI: Defining External Configurations

A part of the  Template WebAPI  series. One design constraint that I wanted to enforce on the application is that the idea of app settings should stay within the API layer. But we still need a mechanism for transferring configurations from the API down to the various implementations. The best design I've come across is to define your configuration dependencies through interfaces, which you can strongly type against your configuration file. Take a look at MongoConnector.cs, where we have to give it a connection string and database name in order to instantiate it. We wrap the configuration in the file MongoConfig. Then when we want to set up the service, we can define the configuration by referring to our app settings. Now we have defined a configuration for an external service without having to pass around our complete AppSettings file, and the configuration is abstracted away from the external services. Full configuration can be found on Github .
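A hedged sketch of the interface-driven configuration idea; the interface, the "Mongo" section name, and the extension method are my own illustrations, not necessarily the names used in the template:

```csharp
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

// Hypothetical names; the template's actual interface and section names may differ.
public interface IMongoConfig
{
    string ConnectionString { get; }
    string DatabaseName { get; }
}

public class MongoConfig : IMongoConfig
{
    public string ConnectionString { get; set; }
    public string DatabaseName { get; set; }
}

public static class MongoConfigRegistration
{
    // Bind the "Mongo" section of appsettings.json and expose it only
    // through the interface that the connector depends on.
    public static IServiceCollection AddMongoConfig(this IServiceCollection services, IConfiguration configuration)
    {
        var config = configuration.GetSection("Mongo").Get<MongoConfig>();
        return services.AddSingleton<IMongoConfig>(config);
    }
}
```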

Running RabbitMQ with Docker

Short post today, mostly just wanted to show how to set up RabbitMQ for local development. Refer back to my previous post to learn more about RabbitMQ. Prerequisite: Docker. For a complete guide, check out the Docker Hub website for RabbitMQ . Why use Docker? Before starting my current job at iHerb, I would go straight for the Windows installation (my development environment) for all of my development tooling needs. It was straightforward and "just worked" a lot of the time. What I've begun to realize while experimenting with all these different tools and services is that things can get a bit cluttered: you end up with a bunch of supporting systems that need to run - caches, queues, databases - and it can be a hassle to manage them through Windows services and executables. Additionally, some of these require dependencies outside of the executable itself, which prompts you to install even more tooling. The great thing about Docker is that it defi

Template WebAPI: One Solution Structure to Rule Them All

A part of the Template WebAPI series. Solution Structure The first distinction that I want to make about the project is the solution structure and how to organize your code. As of the date of this post, the package structure is as follows: project; src (clients: template.Api; core: template.Domain, template.Application, template.Persistence); test. project This folder is used to store any project assets that may need to be edited by the developer, such as ignore files or deployment files (such as docker). Files that generally don't get touched by developers can stay outside of the solution. src This folder contains all of the code used to execute the running application(s). clients Clients are the executable projects that integrate the software into a single application that will be used by consumers. There may be many clients, including support systems such as cache services, running jobs, or other systems that support your main application.

Uncle Bob's Clean Architecture

The basis of my current understanding of architecture comes from the brilliant mind of Robert C. Martin (Uncle Bob). You can find the original post through his blog , which I strongly encourage you to read. In this post I'll try to summarize some of the key points I've understood from his post. Clean Architecture Clean architecture is a design used to emphasize the structure and relation of your various code components to promote the Dependency Rule , such that "source code dependencies only point inwards" . Clean architecture promotes this by defining your software system as a set of layers , organized in such a way that the layers point in one direction and the dependency graph has no circular references. The reason this is important is that it allows you to keep related code close together and increases clarity about how components fit into your system. Explaining the Layers We can really separate out this
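To make the Dependency Rule a bit more concrete, here is a minimal, hypothetical C# sketch (not from Uncle Bob's post): the inner layer defines the interface it needs, and the outer persistence layer depends inward by implementing it. The names are illustrative only.

```csharp
using System;

// Inner layer (e.g. an application/domain project): knows nothing about databases.
public interface ICustomerRepository
{
    Customer GetById(Guid id);
}

public class Customer
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}

// Outer layer (e.g. a persistence project): depends inward on the interface.
public class MongoCustomerRepository : ICustomerRepository
{
    public Customer GetById(Guid id)
    {
        // Database-specific lookup would live here; the inner layer never sees it.
        throw new NotImplementedException();
    }
}
```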

Template WebAPI: Experiences building WebAPI and how I would do it

Working in the .NET environment, you will quickly become familiar with Web APIs, the web server solution Microsoft provides as part of the .NET ecosystem. Building many APIs over the years, I've started to gain opinions about what I like and don't like about how systems are built. I'm happy to showcase my recent work, Core Template, which I've been refining and tinkering with to get it just the way I like it. The Core Template is an example micro service built with .NET Core and designed with Clean Architecture in mind. The goals of this template: Be easy to follow! Follow the Dependency Rule - source code dependencies should only point inward Work with pure entities, rather than depend on database objects Organize classes to be easy to find and make sense Without further ado, please check it out on Github .  I will follow up with articles that explain my different design choices, and will continually keep it up to date as

Liveness and Readiness Check

Liveness and Readiness When designing a distributed system, one of the metrics you'll probably need to track is how healthy your systems are. In a monolithic system, it is easy to determine when your system is down, because the larger system will be unable to service requests, giving you immediate feedback on errors. Moving to a micro service architecture splits that problem into many sub-problems: now you have many different systems that could possibly go down, be under more load than others, and generally be small enough that you want to scale out many small instances rather than rely on a few large ones. This is where we introduce the concept of health checking instances, so that they can report metrics back to the infrastructure and it can decide what it needs to do for optimal performance. Two common health checks are: Readiness and Liveness Readiness Readiness is the status that the application is in a state where it can begin servicing requests. Dur
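For reference, a minimal sketch of how liveness and readiness endpoints might be wired up with ASP.NET Core's built-in health checks; the endpoint paths and tag names are assumptions, not from the post:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Diagnostics.HealthChecks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Diagnostics.HealthChecks;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Tag checks so liveness and readiness can be probed separately.
        services.AddHealthChecks()
            .AddCheck("self", () => HealthCheckResult.Healthy(), tags: new[] { "live" });
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseRouting();
        app.UseEndpoints(endpoints =>
        {
            // Liveness: is the process up at all?
            endpoints.MapHealthChecks("/health/live", new HealthCheckOptions
            {
                Predicate = check => check.Tags.Contains("live")
            });

            // Readiness: are dependencies (database, queue, ...) ready for traffic?
            endpoints.MapHealthChecks("/health/ready");
        });
    }
}
```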

Building a Resilient Cloud Application: The Network Layout

This is the first of a series where I want to break down the different components of modern-day applications that operate around the world and have become easy to deploy with the cloud. The Network Layout Regions Around the world, the big cloud providers have graciously put in the effort to build the infrastructure to host your applications just about anywhere in the world. In order to organize a set of systems, each provider offers various "Regions" in which you can set up your cluster. For example, AWS has set up 16 different regions all over the world, from US-WEST-1 in Northern California to US-EAST-2 in Ohio, and across the world to EU-CENTRAL-1 in Frankfurt and AP-SOUTH-1 in Mumbai. For a full list check out their availability regions listed here . Regions are responsible for isolating your application to a particular geographic area. An application deployed to one region will not be available in another region. We use this principle to isolate failure and to have higher control of

The Scalability Cube

I was just watching a YouTube video where they brought up an interesting diagram, which I realized summed up how to scale your service from a system design point of view. The diagram has three axes: X-axis scaling (Horizontal Duplication) Y-axis scaling (Functional Decomposition) Z-axis scaling (Partitioning) Horizontal Duplication (X-axis) Probably the first thing that comes to mind when you're thinking about scaling out your application. Horizontal duplication means that you should build your application in such a way that you can have multiple instances running and handling requests at the same time. To complete this strategy, a machine called a load balancer will usually sit in front of your cloned systems and forward incoming requests to your instances.  Stateless Design Stateless Design is the design principle that information is not stored from the communications between a user and an instance of your application, and this informa

Where Do I Want to Go?

I wanted to do a slightly different post today and talk about something that I think is just as important to your career as technical knowledge: your motivation. A few days ago I had an interesting conversation with a coworker about what motivates me to work like a dog, even though the company is the one reaping all of the benefits from my work. I told him the idea that motivates me is: " Don't think about it as you're working for the man, think of it as I got somewhere to go, and this is the path I'm taking ." By turning it around, you can internalize that you are in control of your own fate, and that the actions you take will move you to where you want to be. Take the opportunities that life gives you and make them part of your journey. Once you start to think you are not in control of your own destiny, I believe that's when you become jaded and lose out on the opportunities that life gives you. It keeps me going durin

Docker, a Container Solution

As software develops and matures, so has the tooling around how we interact with and use our software. One of the technologies that has started to become widely adopted is the usage of Containers for deployments. A container is the idea that you bundle all the necessary components of your system so that it creates a consistent run-time environment wherever it is deployed. Docker Docker is the most widely adopted container software in the industry. With Docker, you specify all of the components of your system, which may include: - Application Code - Libraries and Dependencies - Frameworks and Run-times Which get bundled into one package that is deployed to your servers. How do Containers differ from Virtualization? Containers and Virtualization can solve some of the same problems, such as separating the domains of your applications and utilizing physical hardware more effectively, but they are implemented differently, and therefore have different benefits and dra

We Eating Good Tonight (Week 4)

Last week had a lot of surprises. I went into the iHerb interview thinking we were going to explore new technologies and whiteboarding scenarios as the meeting invite suggested, but instead it was a code fest, and I was expected to write down as much code as possible in a given amount of time.  Needless to say I didn't feel totally prepared, and I thought that I didn't do that great on that section. But apparently my expectations for myself and what they thought were different, and the recruiter said I continued to impress them. Following that I received the offer on Thursday, and today I signed and sent it back. This means I will be moving on from my time at Tallan and starting my new career at iHerb! Moving forward I'm going to tone down the number of posts as I get ramped up in the new position, and also bring a bit of balance back to my life. I'll probably try to post 1-2 times a week. I have some written notes on Kubernetes and Docker, and a writeup on NoSQL

RabbitMQ: A Look into Messaging Queues

RabbitMQ and Messaging Queues Message queues play a large role in the communication channels of large enterprise systems. A message queue/broker is an added system that stores a queue of messages passed between multiple systems.  Message queues bring general consistency to your architecture by providing a delivery mechanism that ensures the proper delivery of your messages. To integrate message queues into your architecture, we're going to have to re-think how we connect our services together. RabbitMQ is a popular framework for implementing message queues within your system, although there are alternatives such as SQS or MSMQ. Components Messaging queues consist of three main components. Publisher - A system that emits some message within your environment. The message is then stored in the message queue. The Message Queue - A system that stores the messages in a queue to be consumed by other systems in your environment. Consumer - A system that takes
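An illustrative publisher/queue/consumer sketch using the RabbitMQ.Client library; this is my own example, and the queue and host names are assumptions:

```csharp
using System;
using System.Linq;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

public static class HelloQueue
{
    public static void Run()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using var connection = factory.CreateConnection();
        using var channel = connection.CreateModel();

        // The queue where the broker stores messages until a consumer takes them.
        channel.QueueDeclare("demo-queue", durable: false, exclusive: false, autoDelete: false);

        // Publisher: emit a message onto the queue.
        var body = Encoding.UTF8.GetBytes("hello from the publisher");
        channel.BasicPublish(exchange: "", routingKey: "demo-queue", basicProperties: null, body: body);

        // Consumer: receive messages as they arrive.
        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += (_, ea) =>
            Console.WriteLine(Encoding.UTF8.GetString(ea.Body.ToArray()));
        channel.BasicConsume("demo-queue", autoAck: true, consumer: consumer);
    }
}
```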

Grinding like I need to hit Lvl 99 (Week 3)

You know what, I think I did a good job of keeping my expectations more tame this week. I set out to work on data structures, algorithms, and database theory, and for the most part I was able to hit each of those topics and get in some extra stuff.  As for how my on-site interview with iHerb went, I think I did well: I was able to hit their programming question and system architecture questions while maintaining good composure.  Their response was a bit odd, in that they said they really liked me and wanted me to come in again. This could mean a couple of things: one, they aren't sure if they want to hire me, or two, they feel they haven't seen my ceiling and want to see what more I have in the tank.  The two are exact opposites, so I'm hoping that I can figure that out tomorrow.  Either way I told them I could come in on Tuesday for a followup. As for things I want to do this week: continue to work on programming questions, look into implementation

Scaling Engineering - How Reddit was able to Triple in Size

This is a blog post to summarize some of the key notes from the InfoQ video: Scaling @reddit Triple Team Size w/o Losing Control. I thought this was an interesting video to take notes on, as it discusses issues around process and team size, how to create a good environment for engineers to work in, and what we should strive towards. Even the best-architected project will fail as a product if the management isn't there. Video Link:  https://youtu.be/u6hmMW_6fOw 1. Roles and Responsibilities The presenter's main point here was that each person should have a defined scope of responsibility to their team, their peers, and their bosses. By removing ambiguity from roles, you are able to delegate tasks more effectively and assign the right tasks to the right people. RACI (Responsibility Assignment Matrix) A RACI chart relates each task to a position, describing what part that position plays in that task.   R - Responsible, the person that does the task (1) A - Account

Improving the Performance of Web API

Taking notes from this blog: https://www.c-sharpcorner.com/article/important-steps-to-increasing-web-api-performance/ Thread Usage Parallel Programming - Executing a collection of tasks at the same time in order to maximize thread usage. Asynchronous Programming - Asynchronous programming takes a thread only when it needs one to execute, and gives it back when finished or waiting. You may do this so that long operations do not block the thread. Data Transfer and Serialization Compress the results of Web API - You may enable some settings on IIS or use a different protocol to decrease the transfer size. Use a high-speed serializer - In order to transform your .NET classes into readable data, the framework must perform serialization to convert them to a form readable over the web. There are high-performance libraries such as JSON.NET or ProtoBuf that are faster than the built-in serializer. Data Caching - If there is a chance that you don't have t
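A small, hypothetical illustration of the asynchronous programming point, written ASP.NET Core style; the controller and service names are made up, not from the linked article:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public interface IReportService
{
    Task<object> GetReportAsync(int id);
}

[ApiController]
[Route("api/[controller]")]
public class ReportsController : ControllerBase
{
    private readonly IReportService _reports;

    public ReportsController(IReportService reports) => _reports = reports;

    // The thread is released back to the pool while the long-running
    // operation (database call, HTTP call, etc.) is awaited.
    [HttpGet("{id}")]
    public async Task<IActionResult> Get(int id)
    {
        var report = await _reports.GetReportAsync(id);
        if (report == null) return NotFound();
        return Ok(report);
    }
}
```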

Types of Algorithms

A list of algorithm approaches you can use to solve many problems. Brute Force Also referred to as exhaustive search, because it is a naive approach that looks at all possible solutions to the problem and chooses the best one. Divide and Conquer There are two components to this approach: first determine whether you can split your current problem into smaller sub-problems, then combine the sub-problem results to find the answer. Decrease and Conquer This approach is used when you only have one sub-problem, but you can filter the input with a simpler function in order to decrease the complexity of the problem. Greedy Approach This algorithm is used to find an approximate best answer for hard problems. At each point in time, it takes the locally optimal choice with the intent of finding the optimal solution. Dynamic Programming Another approach of splitting up a complex problem into overlapping sub-problems, using the answer for the s
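As a toy illustration of the dynamic programming idea (overlapping sub-problems whose answers are reused), a memoized Fibonacci in C#; this example is mine, not from the post:

```csharp
using System.Collections.Generic;

public static class Fibonacci
{
    private static readonly Dictionary<int, long> Cache = new Dictionary<int, long>();

    // Each sub-problem (Fib of a smaller n) is solved once and reused,
    // turning an exponential recursion into a linear one.
    public static long Fib(int n)
    {
        if (n <= 1) return n;
        if (Cache.TryGetValue(n, out var cached)) return cached;

        var result = Fib(n - 1) + Fib(n - 2);
        Cache[n] = result;
        return result;
    }
}
```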

Big O - Cheat Sheet

Going to make myself a personal copy of a Big O cheat sheet and try to explain why each data structure has certain properties, as a refresher on my data structures. Array (Dynamic) Access - O(1) Arrays have quick access to a specified index because they occupy a finite contiguous block of memory that can be used to pinpoint memory locations. Search - O(n) Arrays have linear-time search because there are no special properties to indicate the value you are trying to look for; therefore you have to search through the elements individually. Insertion - O(n) Arrays take up a contiguous block of memory, therefore when a new element is added to the list, the array must be recreated and/or the elements may be shifted. Deletion - O(n) Same as insertion in that the array takes a contiguous block of memory, and therefore elements need to be shifted. Space - O(n) Requires as much space as there are elements. Stack Access -  O(n) Must traverse from the top reference to get to an index.

New Week, We Still Out Here (Week 2)

Last week I got busy preparing soft skills to get ready for phone screen interviews. For that I did a lot of preparing answers about my current position, behavioral questions, and my goals. I think that has gone a long way, in that I've been able to talk with multiple parties comfortably about where I come from and where I want to go. I also brushed up on a lot of phone-screen, high-level interview questions pertaining to object-oriented programming, system architecture, algorithms, and recursion. This week my goal is to brush up on data structures, database theory, and algorithms in preparation for my onsite on Thursday.

Dictionaries, HashSets, HashTables, and HashBrowns

Yesterday I was asked a question about the differences between a dictionary, a hashset, and a hashtable.  While I was able to reason through part of the question, I wanted to follow up with a blog post describing how each of the data structures relate. Hashtables and Dictionaries are collections of key-value pairs that allow you to store a key and relate it back to a value. HashSets implement the set concept, which maintains that the collection contains only unique elements. Differences between Hashtables and Dictionaries The differences between Hashtables and Dictionaries in C#: Dictionaries are: generic, not thread-safe, output KeyValuePair for enumeration, and potentially faster for value types (no boxing/unboxing needed). Hashtables are: non-generic, thread-safe for multiple readers with a single writer, output DictionaryEntry for enumeration, and potentially slower for value types. A Hashtable can only store object types, therefore when working with value types boxing/unboxing must occur. Both are: Internally managed b
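A small snippet of my own (not from the original post) contrasting the three collections:

```csharp
using System.Collections;
using System.Collections.Generic;

public static class HashCollectionsDemo
{
    public static void Run()
    {
        // Dictionary<TKey, TValue>: generic; keys and values keep their types,
        // so value types like int are not boxed. Enumerates as KeyValuePair<string, int>.
        var dictionary = new Dictionary<string, int> { ["apples"] = 3 };

        // Hashtable: non-generic; keys and values are stored as object,
        // so the int value 3 gets boxed. Enumerates as DictionaryEntry.
        var hashtable = new Hashtable { ["apples"] = 3 };

        // HashSet<T>: unique elements only; Add returns false for duplicates.
        var set = new HashSet<int> { 1, 2, 3 };
        bool addedAgain = set.Add(2); // false, 2 is already present
    }
}
```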

Tallan: A Mobile Experience

As part of my first series of posts about my experiences at Tallan, I'll be going over some of the first projects that I did after completing training, and how they shaped my development perspective. At Tallan we do everything, and that is one of the appeals that drew me to working with the company. I thought that the amount of exposure to different technologies would allow me to try out different things, and boy did I do that. During training, one of the things they mentioned was that they suggested we join a "Practice" so that we could further develop our skills in a certain area and master it, as a way to focus ourselves in the field in addition to our regular duties. The "practice" that stuck out to me the most was the "Mobile Practice," as the director, Matt Kruczek (who became my mentor), sold us on the idea of building consumer-facing products, which really resonated with me at the time. And so I was able to join the exclusive club

New Week Who Dis (Week 1)

Going forward, each week I'm going to try to set some goals, and then I can write about what I learned.  I think there's an opportunity for a job interview, so I'm going to focus on some things that will help me with that. First, I want to review the projects that I've worked on and be able to talk about what I learned from them and how they contributed to my career. Second, I want to investigate core .NET concepts, some common patterns for web services, and the differences between .NET Framework and .NET Core. With the rest of the time, I want to review the YouTube videos that I've watched, pull out the concepts and issues, and begin creating a schedule of topics that I can go into more depth on in later weeks.

N-Tier Applications

N-Tier Applications -- N-Tier application architecture is a pattern that separates the concerns of your system into layers. For example, the most common N-Tier application is a three-tier application, which consists of the presentation layer, the application layer, and the data layer. The Layers Presentation Layer - The layer of your system that interacts directly with the user. In web projects this will usually be some sort of JavaScript framework such as Angular, React, or many others. Application Layer - The layer that encapsulates all your business logic and domain concepts, and interacts with the data layer. In most web projects, this will usually consist of an HTTP server such as IIS or Apache, and RESTful endpoints built with Web API, Spring, or Express. Data Layer - The layer that stores and manages the application data. Popular data stores can be SQL or NoSQL based, such as SQL Server, MySQL, MongoDB, or Cassandra. The advantage of N-Tier applications is that it emphas

Attack Vector: Application DDoS

Quick review of a Netflix article . Application DDoS There is a blog post from Netflix which describes a new attack vector called application DDoS. While traditional DDoS attacks rely on generating heavy network traffic to overload a system, application DDoS relies on heavy computation to bring down a microservice architecture. Let's start with how this is supposed to work. In a microservice architecture, you have a network of microservices that rely on each other. Calling one service can lead to that service calling multiple other services that call other services. This gives attackers the ability to make one request that actually triggers many, many more internal requests. By leveraging this idea, they can amplify their attack on the system: "a single request in a microservices architecture may generate tens of thousands of complex middle tier and backend service calls". This attack cannot be stopped by a traditional firewall because it may not know that the initial request is causing