Modern JavaScript Workflow Explained For Dinosaurs


Great article on the "most common" workflow used by front end developers and a brief overview of the tools used.
https://medium.com/the-node-js-collection/modern-javascript-explained-for-dinosaurs-f695e9747b70


JavaScript package managers (npm)

Bower was arguably the most popular in 2013, but eventually was overtaken by npm around 2015. (It’s worth noting that starting around late 2016, yarn has picked up a lot of traction as an alternative to npm’s interface, but it still uses npm packages under the hood.)

JavaScript module bundler (webpack)

In 2009, a project named CommonJS was started with the goal of specifying an ecosystem for JavaScript outside the browser. A big part of CommonJS was its specification for modules, which would finally allow JavaScript to import and export code across files like most programming languages, without resorting to global variables. The most well-known implementation of CommonJS modules is node.js. As mentioned earlier, node.js is a JavaScript runtime designed to run on the server.
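
A minimal sketch of CommonJS-style modules (the file names are invented for illustration):

// math.js: export functionality with module.exports
module.exports = { add: (a, b) => a + b };

// app.js: import it with require, no globals involved
const { add } = require('./math');
console.log(add(2, 3)); // 5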

The most popular module bundler was Browserify, which was released in 2011 and pioneered the usage of node.js style require statements on the frontend (which is essentially what enabled npm to become the frontend package manager of choice). Around 2015, webpack eventually became the more widely used module bundler (fueled by the popularity of the React frontend framework, which took full advantage of webpack’s various features).

Transpiling code for new language features (babel)

Transpiling code means converting the code in one language to code in another similar language. This is an important part of frontend development — since browsers are slow to add new features, new languages were created with experimental features that transpile to browser-compatible languages.

For CSS, there's Sass, Less, and Stylus, to name a few. For JavaScript, the most popular transpiler for a while was CoffeeScript (released around 2010), whereas nowadays most people use babel or TypeScript. CoffeeScript is a language focused on improving JavaScript by significantly changing the language — optional parentheses, significant whitespace, etc. Babel is not a new language but a transpiler that transpiles next generation JavaScript with features not yet available to all browsers (ES2015 and beyond) to older, more compatible JavaScript (ES5). TypeScript is a language that is essentially identical to next generation JavaScript, but also adds optional static typing. Many people choose to use babel because it's closest to vanilla JavaScript.
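
As a rough illustration of what such a transpile step does (the output below is approximate, not Babel's exact emit):

// ES2015 input
const greet = name => `Hello, ${name}`;

// roughly what a transpiler emits as ES5
var greet = function (name) { return 'Hello, ' + name; };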

Using a task runner (npm scripts)

Now that we’re invested in using a build step to work with JavaScript modules, it makes sense to use a task runner, which is a tool that automates different parts of the build process. For frontend development, tasks include minifying code, optimizing images, running tests, etc.

In 2013, Grunt was the most popular frontend task runner, with Gulp following shortly after. Both rely on plugins that wrap other command line tools. Nowadays the most popular choice seems to be using the scripting capabilities built into the npm package manager itself, which doesn’t use plugins but instead works with other command line tools directly.
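
For illustration, a scripts section in package.json might look like the sketch below; the specific tools (webpack, jest) are assumptions, not prescriptions from the article:

{
  "scripts": {
    "build": "webpack --mode production",
    "test": "jest",
    "start": "webpack-dev-server"
  }
}

Each entry can then be run with npm run <name> (npm start and npm test have shorthands).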

Micro Services Dependency Hell


Interesting series of articles from Postman on how they implemented a microservices architecture, the issues they encountered, and their solutions.

Part 1

https://medium.com/postman-engineering/conquering-the-microservices-dependency-hell-at-postman-with-postman-part-1-introduction-a1ae019bb934

Part 2

https://medium.com/postman-engineering/conquering-the-microservices-dependency-hell-at-postman-with-postman-part-2-7c825d576947

Decision Tree

Visual Studio + Team City deployment


High-level overview:


On the environment to be deployed to:

  • create website on IIS
  • export publish settings file from the website

Visual Studio

  • create Build Configuration
  • create new Publish Profile by importing the publish settings file, remembering to rename the profile
  • create web config transform file

TeamCity

  • create Project
  • create build configuration
  • add Version Control Settings
  • add build steps


Possibly helpful articles:

https://medium.com/monkii/how-to-deploy-asp-net-core-sites-using-teamcity-or-just-command-line-cf05fdee58f5
https://docs.microsoft.com/en-us/visualstudio/deployment/tutorial-import-publish-settings-iis?view=vs-2017

React Question App Overview


Developing Question Application with React

With your backend API up and running, you are finally ready to start developing your React application.

With the "create-react-app" tool, you can scaffold a new React application with just one command. As such, to create your React app, open a new terminal and go to the same directory where you created the qa-api Node.js app. From there, issue the following command:

# this command was introduced in npm@5.2.0
npx create-react-app qa-react

This will make NPM download and run create-react-app in a single command, passing to it qa-react as the desired directory for your new application.

# move into the new directory
cd qa-react

# start your React app
npm start

The last command issued above will start a development server that listens on port 3000 and will open the new app in your default web browser.

After seeing your app, you can stop the server by hitting
Ctrl + c 

Configuring the React Router in Your App

React Router is a very complete solution and, in your first React app, you will touch only the tip of the iceberg. If you do want to learn more about React Router, please, head to the official documentation.

Configuring Bootstrap in Your React App

To make your React app more appealing from the User Interface (UI) point of view, you are going to configure Bootstrap on it.

Creating a Navigation Bar in Your React App

Create a component called NavBar (which stands for Navigation Bar) and add it to your React app.

Creating a Class Component with React

Create a stateful component (a class component) to fetch questions from your backend and show them to your users. To fetch these questions, you will need the help of another library, Axios. In a few words, Axios is a promise-based HTTP client for the browser and for Node.js. Note: this component touches a topic that was not addressed in this article, the lifecycle of React components. In this case, you are just using one of the extension points provided by React, the componentDidMount method.
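
A hedged sketch of what such a component could look like; the state shape and markup here are assumptions, and the URL points at the backend API described later in this post:

import React, { Component } from 'react';
import axios from 'axios';

class Questions extends Component {
  constructor(props) {
    super(props);
    this.state = { questions: null };
  }

  // React calls this lifecycle method once the component is mounted in the DOM
  async componentDidMount() {
    const { data } = await axios.get('http://localhost:8081/');
    this.setState({ questions: data });
  }

  render() {
    if (this.state.questions === null) return <p>Loading questions...</p>;
    return (
      <div>
        {this.state.questions.map(question => (
          <p key={question.id}>{question.title}</p>
        ))}
      </div>
    );
  }
}

export default Questions;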

Routing Users with React Router

import React, { Component } from 'react';
import { Route } from 'react-router-dom';
import NavBar from './NavBar/NavBar';
import Question from './Question/Question';
import Questions from './Questions/Questions';

class App extends Component {
  render() {
    return (
      <div>
        <NavBar />
        <Route exact path='/' component={Questions} />
        <Route exact path='/question/:questionId' component={Question} />
      </div>
    );
  }
}

export default App;

In the new version of your App component, you are using two Route elements (provided by react-router-dom) to tell React when you want the Questions component rendered and when you want the Question component rendered.
More specifically, you are telling React that if your users navigate to / (exact path='/') you want them to see Questions and, if they navigate to /question/:questionId, you want them to see the details of a specific question.

Configuring an Auth0 Account

If you do not have one, now is a good time to sign up for a free Auth0 account. Then, from the Auth0 dashboard:

Create an Application
Set the Allowed Callback URLs field

Securing your Backend API with Auth0

To secure your Node.js API with Auth0, you will have to install and configure only two libraries:
  • express-jwt: A middleware that validates a JSON Web Token (JWT) and sets req.user with its attributes.
  • jwks-rsa: A library to retrieve RSA public keys from a JWKS (JSON Web Key Set) endpoint.

First, both of the POST endpoints declare that they want to use checkJwt, which makes them unavailable to unauthenticated users. Second, both add a new property called author on questions and answers. These new properties receive the name (req.user.name) of the user issuing the request.
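
A hedged sketch of how that wiring typically looks with the classic (pre-v7) express-jwt API; the Auth0 domain and audience values below are placeholders:

const jwt = require('express-jwt');
const jwksRsa = require('jwks-rsa');

// validate incoming JWTs against the Auth0 JWKS endpoint
const checkJwt = jwt({
  secret: jwksRsa.expressJwtSecret({
    cache: true,
    rateLimit: true,
    jwksRequestsPerMinute: 5,
    jwksUri: 'https://YOUR_AUTH0_DOMAIN/.well-known/jwks.json',
  }),
  audience: 'YOUR_API_IDENTIFIER',
  issuer: 'https://YOUR_AUTH0_DOMAIN/',
  algorithms: ['RS256'],
});

// only authenticated users can post; checkJwt sets req.user from the token
app.post('/', checkJwt, (req, res) => {
  const author = req.user.name;
  // ... create the question with its author ...
});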

Securing your React App with Auth0

To secure your React app, you will have to install only one library: auth0-js. This is the official library provided by Auth0 to secure SPAs like yours.

Refactor your NavBar component to allow users to authenticate: open the NavBar.js file and replace its code.

After refactoring the NavBar component, you will have to create a component to handle the callback. This component will be responsible for two things:
First, it calls the handleAuthentication method to fetch the user information sent by Auth0.
Second, it redirects your users to the home page (history.replace('/')) after the handleAuthentication process finishes. In the meantime, this component shows the following message: "Loading profile".

Adding Features to Authenticated Users

First, you will enable authenticated users to create new questions. Then, you will refactor the Question (singular) component to show a form so authenticated users can answer these questions.

Keeping Users Signed In after a Refresh

If you refresh your browser, you will notice that you are signed out. This is because you are saving your tokens in memory (as you should) and the memory is wiped out when you hit refresh.

Luckily, solving this problem is easy. You will have to take advantage of the Silent Authentication provided by Auth0. That is, whenever your application is loaded, it will send a silent request to Auth0 to check if the current user (actually the browser) has a valid session.

You will have to change a few configurations in your Auth0 account.
Add your URL http://localhost:3000 to Allowed Web Origins, as your app is going to issue an AJAX request to Auth0.
Add your URL http://localhost:3000 to Allowed Logout URLs. To enable users to end their session at Auth0, you will have to call the logout endpoint. Similarly to the authorization endpoint, the logout endpoint only redirects users to whitelisted URLs after the process.

Now update the src/Auth.js file to implement silent login when the component has loaded.
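
A hedged sketch of what that silent login can look like with auth0-js, assuming an Auth helper that holds a WebAuth instance as this.auth0 and exposes a setSession method (both names are assumptions):

// ask Auth0 whether the browser still has a valid session
silentAuth() {
  return new Promise((resolve, reject) => {
    this.auth0.checkSession({}, (err, authResult) => {
      if (err) return reject(err);
      this.setSession(authResult); // put the tokens back into memory
      resolve();
    });
  });
}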

There are a lot of links on the original article that should be used to investigate further and delve deeper into the technologies used.

my thanks to this great blog post:
https://auth0.com/blog/react-tutorial-building-and-securing-your-first-app/

Developing a Backend API with Node.js and Express


In the following sections, you will build a simple Q&A (Question & Answer) app that will allow users to interact with each other asking and answering questions. To make the whole process more realistic, you will use Node.js and Express to create a rough backend API.

Developing a Backend API with Node.js and Express

Build a backend API to support your Q&A app. In this section, you will use Express alongside Node.js to create this API. With this library, as you will see here, you can quickly build apps to run on servers.
# use NPM to start the project
npm init -y

This will create a file called package.json inside your directory. This file will hold the details (like the dependencies) of your backend API.

npm i body-parser cors express helmet morgan
This command will install five dependencies in your project:
  1. body-parser:
    This is a library that you will use to convert the body of incoming requests into JSON objects.
  2. cors:
    This is a library that you will use to configure Express to add headers stating that your API accepts requests coming from other origins. This is also known as Cross-Origin Resource Sharing (CORS).
  3. express:
    This is Express itself.
  4. helmet:
    This is a library that helps to secure Express apps with various HTTP headers.
  5. morgan:
    This is a library that adds some logging capabilities to your Express app.

Also, you will see a new file called package-lock.json. NPM uses this file to make sure that anyone else using your project (or even yourself in other environments) will always get versions compatible with those that you are installing now.

Then, the last thing you will need to do is to develop the backend source code. So, create a directory called src inside your qa-api directory and create a file called index.js:

// import dependencies
const express = require('express');
const bodyParser = require('body-parser');
const cors = require('cors');
const helmet = require('helmet');
const morgan = require('morgan');

// define the Express app
const app = express();

// the database
const questions = [];

// enhance your app security with Helmet
app.use(helmet());

// use bodyParser to parse application/json content-type
app.use(bodyParser.json());

// enable all CORS requests
app.use(cors());

// log HTTP requests
app.use(morgan('combined'));

// retrieve all questions
app.get('/', (req, res) => {
  const qs = questions.map(q => ({
    id: q.id,
    title: q.title,
    description: q.description,
    answers: q.answers.length,
  }));
  res.send(qs);
});

// get a specific question
app.get('/:id', (req, res) => {
  const question = questions.filter(q => (q.id === parseInt(req.params.id)));
  if (question.length > 1) return res.status(500).send();
  if (question.length === 0) return res.status(404).send();
  res.send(question[0]);
});

// insert a new question
app.post('/', (req, res) => {
  const {title, description} = req.body;
  const newQuestion = {
    id: questions.length + 1,
    title,
    description,
    answers: [],
  };
  questions.push(newQuestion);
  res.status(200).send();
});

// insert a new answer to a question
app.post('/answer/:id', (req, res) => {
  const {answer} = req.body;

  const question = questions.filter(q => (q.id === parseInt(req.params.id)));
  if (question.length > 1) return res.status(500).send();
  if (question.length === 0) return res.status(404).send();

  question[0].answers.push({
    answer,
  });

  res.status(200).send();
});

// start the server
app.listen(8081, () => {
  console.log('listening on port 8081');
});
With this file in place, you are good to go. To run your app, just issue the following command:
# from the qa-app directory
node src

To test if everything is really working, open a new terminal and issue the following commands:

# issue an HTTP GET request
curl localhost:8081

# issue a POST request
curl -X POST -H 'Content-Type: application/json' -d '{
  "title": "How do I make a sandwich?",
  "description": "I am trying very hard, but I do not know how to make a delicious sandwich. Can someone help me?"
}' localhost:8081

curl -X POST -H 'Content-Type: application/json' -d '{
  "title": "What is React?",
  "description": "I have been hearing a lot about React. What is it?"
}' localhost:8081

# re-issue the GET request
curl localhost:8081
With your backend API up and running, you are finally ready to start developing your React application.

my thanks to this great blog post:
https://auth0.com/blog/react-tutorial-building-and-securing-your-first-app/

React Basics Intro


React and the JSX Syntax

React uses a funny syntax called JSX. JSX, which stands for JavaScript XML, is a syntax extension to JavaScript that enables developers to use XML (and, as such, HTML) to describe the structure of the user interface.

function showRecipe(recipe) {
  if (!recipe) {
    return <p>Recipe not found!</p>;
  }
  return (
    <div>
      <h1>{recipe.title}</h1>
      <p>{recipe.description}</p>
    </div>
  );
}

In this case, the showRecipe function is using the JSX syntax to show the details of a recipe (i.e., if the recipe is available) or a message saying that the recipe was not found.

React Components

Components in React are the most important pieces of code. Everything you can interact with in a React application is (or is part of) a component. For example, when you load a React application, the whole thing will be handled by a root component that is usually called App.

The biggest advantage of using components to define your application is that this approach lets you encapsulate different parts of your user interface into independent, reusable pieces. Having each part in its own component makes it easier to reason about, test, and reuse each piece.

Defining Components in React

There are two types of React components:

Functional components

Functional components are simply "dumb" components that do not hold any internal state (making them great for handling presentation). For example, if you are creating a component that will only show the profile of the authenticated user, you can create a functional component as follows:
function UserProfile(props) {
  return (
    <div>
      <p>{props.userProfile.name}</p>
    </div>
  );
}

Class components

Class components are more complex components that can hold internal state. If you are going to create a component to handle things that need to hold some state and perform more complex tasks, like a subscription form, you will need a class component. To create a class component in React, you would proceed as follows:
class SubscriptionForm extends React.Component {
  constructor(props) {
    super(props);

    this.state = {
      acceptedTerms: false,
      email: '',
    };
  }

  updateCheckbox(checked) {
    this.setState({
      acceptedTerms: checked,
    });
  }

  updateEmail(value) {
    this.setState({
      email: value,
    });
  }

  submit() {
    // ... use email and acceptedTerms in an ajax request or similar ...
  }

  render() {
    return (
      <form>
        <input
          type="email"
          onChange={event => {this.updateEmail(event.target.value)}}
          value={this.state.email}
        />
        <input
          type="checkbox"
          checked={this.state.acceptedTerms}
          onChange={event => {this.updateCheckbox(event.target.checked)}}
        />
        <button onClick={() => this.submit()}>Subscribe</button>
      </form>
    );
  }
}

This component defines three input elements (actually, two input tags and one button).

This component also defines an internal state (this.state) with two fields: acceptedTerms and email.

my thanks to this great blog post:
https://auth0.com/blog/react-tutorial-building-and-securing-your-first-app/

Data Concurrency


Concurrency Conflicts

A concurrency conflict occurs when one user displays an entity's data in order to edit it, and then another user updates the same entity's data before the first user's change is written to the database.

Pessimistic Concurrency (Locking)

Before you read a row from a database, you request a lock for read-only or for update access. If you lock a row for update access, no other users are allowed to lock the row either for read-only or update access. If you lock a row for read-only access, others can also lock it for read-only access but not for update. Pessimistic concurrency is complex to program, requires significant database management resources, and can cause performance problems as the number of users of an application increases. The Entity Framework provides no built-in support for it.

Optimistic Concurrency

Optimistic concurrency means allowing concurrency conflicts to happen, and then reacting appropriately if they do. There are three different approaches:
  1. You can keep track of which properties conflicting users have modified and update only the corresponding columns in the database. This can't avoid data loss if competing changes are made to the same property of an entity, and it is often not practical in a web application, because it can require that you maintain large amounts of state in order to keep track of all original property values for an entity as well as new values.
  2. Client Wins (or Last in Wins): allow all changes to happen. If you don't do any coding for concurrency handling, this happens automatically.
  3. Store Wins: prevent the second user's change from being updated in the database. Typically, you would display an error message, show this user the current state of the data, and allow them to reapply their changes if they still want to make them (see the sketch below).
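
A minimal, hedged sketch of the Store Wins idea using a version column, illustrated in JavaScript for brevity; the db handle follows a better-sqlite3-style API and all names are invented:

// reject the update if another user bumped the row's version first
function updateQuestion(db, edited) {
  const result = db
    .prepare('UPDATE questions SET title = ?, version = version + 1 WHERE id = ? AND version = ?')
    .run(edited.title, edited.id, edited.version);
  if (result.changes === 0) {
    // the row was changed (or deleted) since this user read it
    throw new Error('Concurrency conflict: the data was modified by another user.');
  }
}
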
my thanks to this article on handling data concurrency with Entity Framework:
https://docs.microsoft.com/en-us/aspnet/mvc/overview/getting-started/getting-started-with-ef-using-mvc/handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application

Career development


Learning & Improving

The best way I know of to excel in software is to keep a growth mindset. The more you can learn and improve, the more career options you'll have.

Improvement can be broken down along two lines: input and output. I've tended to go through phases in my career where I've done more input than output, and (less often) vice versa, but I've found both to be essential for improvement.

Input

There are innumerable ways to gain knowledge: articles, books, papers, MOOCs, conversations, experimentation, project documentation, access to experts who are willing to answer questions, and many more. Quality and efficiency vary across these.

Personally, I've gotten a ton of value out of reading books and applying what I've learned, concentrating my reading more on classic books that'll help you throughout your career. Book authors have spent many months, maybe years, distilling their years of experience into a well-crafted format that you can inhale over a few weekends.

I'll mostly stick to broad areas here rather than specific book recommendations, but you can check out my Goodreads profile for more specifics if you're curious.

Learn about software career development. For broad career development issues, I'll make an exception in this post and highly recommend Apprenticeship Patterns, The Pragmatic Programmer, and The Passionate Programmer. Bluntly, if you were to stop reading this post right now and go read those three books instead, I'd think you made a wise decision.

Learn about code. Again, I prefer books to blog posts.
Learn about the specific languages and frameworks that you're concentrating on.
Learn about software design & architecture principles and patterns.
Learn about test-driven development and refactoring.
Learn about the platform you're deploying to.
Learn about foundational computer science concepts.

Learn about communication. There's a lot of material out there, both in the software space and outside. The way you communicate is a crucial part of the way others perceive you, whether it's in conversations, in emails, or in documentation. Learn about logical fallacies, cognitive biases, and nonviolent communication. Look for materials on communicating in effective writing, presentations, and difficult conversations.

Read open-source code. My experience is that reading OSS code is most effective when you've got a specific purpose in mind. But it can also be instructional to clone a repository from GitHub, run the tests, try and reproduce issues in the issue tracker, etc. At worst, you get an idea of how someone else codes and what's easy or hard for you to understand. And at best, you'll find yourself leaning over into the output side of things, which we'll discuss next.

Output

Write code. This isn't necessarily the most important skill you'll need in your software career, but it's certainly the one that's most unique to the role of a software developer. There are lots of ways to practice writing code, but my personal favorite is doing code katas.

Code katas have a couple of main schools of thought, and I like both of them for different reasons:

  • Repetition. By solving the same problem many times with the same or similar solutions, you get to understand the problem really well. This gives you a sense of mastery, which I've found to be great for my emotional well-being when I'm mostly frustrated by programming problems. You can treat these as either a meditation, where you're thinking deeply, or as a race, where you're honing your skills with the tools. Some folks even treat repetition katas as a performance, with or without musical or spoken accompaniment.
  • New problems/solutions. The idea here is that most of the coding part of the job is about thinking about and solving new problems, ones that might be related to ones you've seen before but aren't identical. With new problem katas, you're intentionally putting yourself in the uncomfortable position of not-knowing, where we each need to learn to be OK as we're moving toward solutions. You can find lots of examples of bite-sized katas—many have ridiculous names, but they can still be great exercises to work through.

Katas are typically fairly constrained problems compared with "the real world"—more on the order of a single sitting, though some can get quite tricky and take much longer.

Breakable toys are much more realistic and larger-scope projects than katas, but they still aren't mission-critical. So it's OK if they get broken as you're learning. Having a breakable toy lets you see how different ideas work in the long term, which is super-helpful in the work world.

Open source software can be pretty intimidating to get into, but there are a few reasons I still think it's a good idea to look at. First, you can get practice learning a new codebase. Second, for many (not all) open-source projects, the level of code quality is pretty high. You'll get a chance to see how experienced folks approach problems and solve them, by browsing issue and pull request history.

Learn to use the project really well if you don't already. This will force you to go through the existing documentation, and you'll probably find some low-hanging fruit: typos, grammar issues, hard-to-understand explanations, etc. I've also heard good things about OSS issue aggregators like OpenHatch and Up for Grabs for finding projects that fit the bill here.

Practice empathy and communication in your daily interactions. When little misunderstandings occur, ask yourself how they happened, what you can do in the future to prevent them, and how you can get past the misunderstanding with everyone feeling better afterward than before. When you're talking, ask yourself what your audience needs from you: do they need all the details, or do they mostly care about the highest-level bits? These skills are among the most crucial for any job that requires collaboration.

Managing your time

Decide what your goals are at a high level, and find ways to move in that direction incrementally. One idea is to take a tip from agile project management: take your big hairy audacious goal (product), figure out the first step or two toward it (releases), break the first one down into the first few medium-sized goals toward it (epics), and break those down into goals that you can knock out in a single sitting (stories). In this way, you can fit learning into the space that's available, rather than having to rely on luxuries like wide-open nights and weekends.

Find mentors

We all need feedback. Without feedback on how we're progressing, it's easy to divorce our own self-assessment from reality, and to think that we're doing much worse or better than we really are.
Try to find at least one person who can help guide you in your progress. My first real mentor didn't have a formal mentoring role, but he was the developer in the next cube, he was a few years ahead of me in experience, and he understood how to communicate the basics. Between his direct feedback and the books he told me to read, I improved quickly.
You can find mentors outside the workplace, too.
  • CodeNewbie is a Twitter chat, podcast, blog, and community with tons of great ideas about breaking into and excelling in this industry, with participants at all levels.
  • Social media: Despite its [my] faults, I've found Twitter to be particularly well-populated with software experts sharing tips, links, and stories. And some people are willing to answer questions directly—others have too many folks jostling for their attention, so don't feel too bad if they can't make time for you directly.
  • User groups: If you're in an area that has them and you can make the timing work, user groups can be a great place to meet other folks who are excited about software. Check out Meetup for topics you might be interested in. Introduce yourself and make connections—if you're like me this will be hard, but you might just meet your next employer here.
  • Open source software: The people reviewing your pull requests are already giving you direct feedback! Maybe not career-level stuff, but they might be up for that too as you contribute more.

The name of the relationship isn't all that important — what you really want here is to get external feedback and make progress toward your goals.

my thanks to this fantastic blog post:
https://8thlight.com/blog/colin-jones/2017/10/24/advice-for-early-career-developers.html

Clean Architecture


A lot of modern architectures (a list of which is in the original post referenced at the bottom) are trying to achieve the same objective, which is the separation of concerns. They all achieve this separation by dividing the software into layers. Each has at least one layer for business rules, and another for interfaces.

Each of these architectures produces systems that are:

  1. Independent of Frameworks. The architecture does not depend on the existence of some library of feature laden software. This allows you to use such frameworks as tools, rather than having to cram your system into their limited constraints.
  2. Testable. The business rules can be tested without the UI, Database, Web Server, or any other external element.
  3. Independent of UI. The UI can change easily, without changing the rest of the system. A Web UI could be replaced with a console UI, for example, without changing the business rules.
  4. Independent of Database. You can swap out Oracle or SQL Server, for Mongo, BigTable, CouchDB, or something else. Your business rules are not bound to the database.
  5. Independent of any external agency. In fact your business rules simply don’t know anything at all about the outside world.

The diagram at the top of the original article is an attempt at integrating all these architectures into a single actionable idea.

The Dependency Rule

The concentric circles represent different areas of software. In general, the further in you go, the higher level the software becomes. The outer circles are mechanisms. The inner circles are policies.

The overriding rule that makes this architecture work is The Dependency Rule. This rule says that source code dependencies can only point inwards. Nothing in an inner circle can know anything at all about something in an outer circle. In particular, the name of something declared in an outer circle must not be mentioned by the code in an inner circle. That includes functions, classes, variables, or any other named software entity.

By the same token, data formats used in an outer circle should not be used by an inner circle, especially if those formats are generated by a framework in an outer circle. We don’t want anything in an outer circle to impact the inner circles.

Entities

Entities encapsulate Enterprise wide business rules. An entity can be an object with methods, or it can be a set of data structures and functions. It doesn’t matter so long as the entities could be used by many different applications in the enterprise.

If you don’t have an enterprise, and are just writing a single application, then these entities are the business objects of the application. They encapsulate the most general and high-level rules. They are the least likely to change when something external changes. For example, you would not expect these objects to be affected by a change to page navigation, or security. No operational change to any particular application should affect the entity layer.

Use Cases

The software in this layer contains application specific business rules. It encapsulates and implements all of the use cases of the system. These use cases orchestrate the flow of data to and from the entities, and direct those entities to use their enterprise wide business rules to achieve the goals of the use case.

We do not expect changes in this layer to affect the entities. We also do not expect this layer to be affected by changes to externalities such as the database, the UI, or any of the common frameworks. This layer is isolated from such concerns.

We do, however, expect that changes to the operation of the application will affect the use-cases and therefore the software in this layer. If the details of a use-case change, then some code in this layer will certainly be affected.

Interface Adapters

The software in this layer is a set of adapters that convert data from the format most convenient for the use cases and entities, to the format most convenient for some external agency such as the Database or the Web. It is this layer, for example, that will wholly contain the MVC architecture of a GUI. The Presenters, Views, and Controllers all belong in here. The models are likely just data structures that are passed from the controllers to the use cases, and then back from the use cases to the presenters and views.

Similarly, data is converted, in this layer, from the form most convenient for entities and use cases, into the form most convenient for whatever persistence framework is being used. i.e. The Database. No code inward of this circle should know anything at all about the database. If the database is a SQL database, then all the SQL should be restricted to this layer, and in particular to the parts of this layer that have to do with the database.

Also in this layer is any other adapter necessary to convert data from some external form, such as an external service, to the internal form used by the use cases and entities.

Frameworks and Drivers.

The outermost layer is generally composed of frameworks and tools such as the Database, the Web Framework, etc. Generally you don’t write much code in this layer other than glue code that communicates to the next circle inwards.

This layer is where all the details go. The Web is a detail. The database is a detail. We keep these things on the outside where they can do little harm.

Only Four Circles?

No, the circles are schematic. You may find that you need more than just these four. There’s no rule that says you must always have just these four. However, The Dependency Rule always applies. Source code dependencies always point inwards. As you move inwards the level of abstraction increases. The outermost circle is low level concrete detail. As you move inwards the software grows more abstract, and encapsulates higher level policies. The innermost circle is the most general.

Crossing boundaries.

At the lower right of the diagram is an example of how we cross the circle boundaries. It shows the Controllers and Presenters communicating with the Use Cases in the next layer. Note the flow of control. It begins in the controller, moves through the use case, and then winds up executing in the presenter. Note also the source code dependencies. Each one of them points inwards towards the use cases.

We usually resolve this apparent contradiction by using the Dependency Inversion Principle. In a language like Java, for example, we would arrange interfaces and inheritance relationships such that the source code dependencies oppose the flow of control at just the right points across the boundary.

For example, consider that the use case needs to call the presenter. However, this call must not be direct because that would violate The Dependency Rule: No name in an outer circle can be mentioned by an inner circle. So we have the use case call an interface (Shown here as Use Case Output Port) in the inner circle, and have the presenter in the outer circle implement it.

The same technique is used to cross all the boundaries in the architectures. We take advantage of dynamic polymorphism to create source code dependencies that oppose the flow of control so that we can conform to The Dependency Rule no matter what direction the flow of control is going in.
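
A minimal JavaScript sketch of the Use Case Output Port idea; the names are illustrative, and in JavaScript the "interface" is implicit (duck typing):

// inner circle: the use case depends only on the shape of its output port
class ShowQuestionUseCase {
  constructor(outputPort) {
    this.outputPort = outputPort;
  }
  execute(question) {
    // flow of control moves outward, but no outer-circle name appears here
    this.outputPort.present({ title: question.title });
  }
}

// outer circle: the presenter implements the port
class ConsolePresenter {
  present(viewModel) {
    console.log(viewModel.title);
  }
}

new ShowQuestionUseCase(new ConsolePresenter()).execute({ title: 'What is React?' });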

What data crosses the boundaries.

Typically the data that crosses the boundaries is simple data structures. You can use basic structs or simple Data Transfer objects if you like. Or the data can simply be arguments in function calls. Or you can pack it into a hashmap, or construct it into an object. The important thing is that isolated, simple, data structures are passed across the boundaries. We don’t want to cheat and pass Entities or Database rows. We don’t want the data structures to have any kind of dependency that violates The Dependency Rule.

For example, many database frameworks return a convenient data format in response to a query. We might call this a RowStructure. We don’t want to pass that row structure inwards across a boundary. That would violate The Dependency Rule because it would force an inner circle to know something about an outer circle.

So when we pass data across a boundary, it is always in the form that is most convenient for the inner circle.

Conclusion

Conforming to these simple rules is not hard, and will save you a lot of headaches going forward. By separating the software into layers, and conforming to The Dependency Rule, you will create a system that is intrinsically testable, with all the benefits that implies. When any of the external parts of the system become obsolete, like the database, or the web framework, you can replace those obsolete elements with a minimum of fuss.



my thanks to this great blog post:
https://8thlight.com/blog/uncle-bob/2012/08/13/the-clean-architecture.html

Learn “Why” not “How”


Fleeting Knowledge - Language Specific

That would be a framework's API and syntax. Knowing those nitty-gritty details of Angular and its API is necessary to be productive in an Angular codebase, but useless in any other codebase. I refer to this type of knowledge as the "how", as in, "How do I do X in Angular?".

Enduring Knowledge

The more enduring kind of knowledge is the “why”. Why does Angular exist? Why does it have the features that it has? What problems is it trying to solve?
Frameworks like Angular may have a short lifespan, but the problems they tackle live on for much longer. Angular was built to tackle the problem of writing, maintaining, and iterating on complex web apps. Any knowledge that helps us approach this problem will be valuable for a long time. In fact, it’s not a huge stretch to swap “web apps” for “software”, and suddenly we’ve got a problem that’s going to last an entire career! That’s the kind of knowledge we want — the enduring kind.

Diving Deep

Asking “why” and getting to those enduring nuggets of knowledge isn’t always easy unfortunately. The context and motivation behind a particular feature is the kind of knowledge that’s harder to come by, but it’s well worth the effort. Let’s go through an example.

Say I google “Why should I use Angular”. I might run into reasons like this:
  1. MVC and separation of concerns
  2. Two-way data binding
  3. Dependency Injection
If I don’t know what MVC and dependency injection are and why they’re good practices, then I’m simply left with more questions. Why would I want dependency injection? And why is separation of concerns useful?
So let’s keep digging (this is super simplified):
  1. Angular uses dependency injection…
  2. …and dependency injection is useful for writing unit tests
  3. …and unit tests are useful for maintaining and iterating on software

Got it! Now I've learned about dependency injection, which is a generic design pattern that'll be useful to me past Angular's lifetime. This knowledge also helps me better understand how and when to use that Angular feature, and when I can ignore it. Practical knowledge for our Angular work today, and enduring design pattern knowledge for the future. Win-win!
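
A tiny JavaScript illustration of why dependency injection helps with unit tests (all names are made up):

// the HTTP client is injected rather than hard-coded inside the service
function makeQuestionService(httpClient) {
  return { load: id => httpClient.get('/questions/' + id) };
}

// production code injects a real client; a test injects a fake, no network needed
const fakeClient = { get: url => Promise.resolve({ data: url }) };
makeQuestionService(fakeClient).load(1).then(r => console.log(r.data)); // /questions/1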

In general, asking why is a recursive process, just like installing a software dependency. We have to follow all the recursive dependencies until they all resolve, otherwise we end up with broken software or, in this case, incomplete understanding. We have to continually dig deeper until we hit a layer whose value we understand. Kids know this intuitively!

Your brain’s dependency tree

Here’s how I (simplistically) imagine the “Why Angular” knowledge dependency tree:
  1. At the top level we have the most specialized and most fleeting solutions: frameworks like Angular.
  2. One level deeper, we get into specific patterns that are common across different frameworks: DI, MVC, etc.
  3. As we dive deeper, we get into more fundamental software engineering practices, and so on until we arrive at core problems in software engineering.

I love this visualization because it exposes the deeper level concepts for what they are: building blocks. Once we dive deep into a framework that uses MVC and learn what MVC is all about, then we’ll carry that understanding over to any other MVC framework we use in the future. It’s just like when a package manager installs a new package and finds one of the dependencies already installed — it can just reuse it.

Having more building blocks doesn't just make us faster learners, it's also essential for innovating new solutions. Almost every new piece of software is inspired by many other ideas before it. One example is redux, which is inspired by Flux, CQRS, Event Sourcing, and probably more (perhaps Clojure atoms). We stand on the shoulders of giants not just by using the software they built, but by understanding the ideas they introduced.

Personal Reflection

When working on a project, the path of least resistance can be copy, paste, and ship. But that isn't the best approach for personal growth.

You can get more value from these projects by:
  1. understanding the framework's (Angular's) decisions within the context of the problems it was solving
  2. asking why and not being afraid of following the rabbit holes
  3. understanding the motivations and tradeoffs behind the Angular features
We should learn frameworks not just to build stuff, but to learn new ideas, because once a framework is out of use, the ideas are all that's left. But when a framework doesn't teach us anything new, I am reminded of Alan Perlis's quote:

  • “A language [or framework] that doesn’t affect the way you think about programming, is not worth knowing.”

Even if frameworks come and go, we can still learn the ideas behind them and become better engineers, as long as we dive deep enough.




my thanks to this great blog post:
https://hackernoon.com/how-ages-faster-than-why-712e25c9eb3b

Writing Stable Code


In software, we use version numbers to signal that something has changed.

But when it comes to software version numbers, the current leader in version numbering schemes is SemVer (or Semantic Versioning). Don't be fooled, though! Many people claim to know how SemVer works, but have never read the specification. Since this is a critical piece of what we are about to talk about, here is a summary of the spec:

Version numbers take the form X.Y.Z, sometimes augmented with additional pre-release and build information: X.Y.Z-AAA+BBB. And each of those fields means something well defined and specific.

SemVer Summary

  1. X is the major number. Changes in this indicate breaking changes to the API (and/or behavior).
  2. Y is the minor number. Changes to this number indicate that new features were added, but that no APIs are broken as a result.
  3. Z is the patch version. Changes to this indicate that internal changes were made, but that no changes (even compatible changes) were made to the API. (See the illustration right after this list.)
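
To make those semantics concrete, here is a quick illustration using npm's own semver package (the version numbers are made up):

const semver = require('semver');

semver.satisfies('1.4.2', '^1.3.0'); // true: minor and patch bumps stay compatible
semver.satisfies('2.0.0', '^1.3.0'); // false: a major bump signals breaking changes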

Countless projects use a format that looks like SemVer, but many of them ignore the semantics behind the version number. Often, it seems that version numbers are incremented by "gut feel" instead of any consistent semantic: "This feels like a minor version update."

Why use versioning

Version numbers help your users understand something about the nature of the changes they can expect. If you don't follow a pattern, they are left guessing. And this frustrates people.

Following SemVer introduces rigor on two fronts:

  1. It sends clear signals to users about the depth of changes they can expect in a release.
  2. It sends a clear signal to your developers about what is, and what is not, allowed when it comes to changing the code.

I cannot overstate the importance of (2). SemVer helps us impose self-discipline, which in turn minimizes both internal and external disruption.

Reorganizing, Refactoring, and Renaming

If you reorganize the package structure of your public API, or if you do a major renaming, or if you choose to change the methods/structs/classes/etc of your public API, you must increment the major version number.
Such changes mean that anyone who's using your code will experience breakage.

It's okay to make internal changes that don't touch any public API items. So minor internal-only refactoring can be done in minor, and even patch, releases (though we don't recommend doing it in patch releases).

So in effect, the following are not to be changed except during major updates:

  1. Package structure
  2. Public class, struct, enum, trait, interface, etc. names, nor the names of any of the items on these
  3. Constants or public variable names or values
  4. Function/method names
  5. Function/method signatures for existing functions except where the change is additive and the added argument is optional. Return value types and exceptions must also not change.

Introducing New Features

Minor versions may introduce new features, but features must be introduced without breaking existing APIs.

Features are additive in nature: They bring new things, but do not modify or delete existing things.

Deprecating

  1. Mark a thing as deprecated as soon as it is considered deprecated, even if that is a patch or minor release. Deprecation, after all, is a warning condition, not an error condition.
  2. Do not change the behavior of the deprecated thing during minor or patch releases.
  3. Remove deprecated things only at major version changes. Until that time, you're still on the hook for supporting them.

Deprecation is a signal that in the future a thing will be removed. But it is not an excuse to change, delete, or ignore the functionality of that bit of code outside of the SemVer constraints.
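
A minimal sketch of deprecating without changing behavior (the function names are illustrative):

/** @deprecated since 1.4.0; use fetchQuestions() instead. Removal is planned for 2.0.0. */
function getQuestions() {
  return fetchQuestions(); // behavior stays intact until the next major release
}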

Errors and Exceptions

Do not make casual changes to what exceptions are thrown and when.

Bugs and Security Fixes: How To Handle Real Life

When the real world comes crashing in, we make exceptions. But professional software developers make them wisely and carefully.

The important concept here is the minimally invasive change. That is, when patching bugs or security releases, we may need to change the API, but we should do it by changing the absolute minimum number of things we can get away with. And we do that even if it means sacrificing our "architectural purity".

Conclusion

The professional software developer has long-term usability and stability as a goal. Yes, well-architected code is important. But there is a time and place for making that your focus. And maintenance releases (minor and patch versions) are not an occasion to refactor, re-organize, or make sweeping modifications.

Be conscientious about how much effort the users of your code put into using your code. I can tell you from experience what we do when the maintenance burden you impose on us gets wearying: We stop using your tools (or we fork them).

SemVer is a communications tool. But to use it well, we must use it accurately. And that means writing code focused on stability.



my thanks to this great article:

http://technosophos.com/2018/07/04/be-nice-and-write-stable-code.html

JMeter Intro


How to run JMeter:

  1. Download the JMeter archive
  2. Unzip the archive
  3. Run the jmeter executable in the bin/ folder. The executable extension depends on the OS (.bat for Windows, .sh for Linux / Mac)
  4. JMeter should launch the UI!

Create Request

1) Add HTTP headers, i.e. the Http Header Manager config element

Right-click on the TestPlan: Add -> Config Element -> Http Header Manager.

Add the necessary HTTP headers.

2) Create a ThreadGroup by right-clicking on the Test Plan

Right-click on the TestPlan: Add -> Threads (users) -> Thread Group.

The ThreadGroup is a container for the logic which will be run by a user thread.

3) Add an Http Request sampler

Right-click on the ThreadGroup, then select Add -> Sampler -> Http Request.

Add the server name, HTTP method, path, parameters, etc. here.

4) Add results viewers, i.e. Listeners

Add -> Listener -> View Results Tree
Add -> Listener -> Graph Results

Click on the Play button to run.
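
You can also run a saved test plan without the UI; a commonly used non-GUI invocation (the file names here are assumptions):

# run headlessly: -n = non-GUI mode, -t = test plan, -l = results file
jmeter -n -t test.jmx -l results.jtl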

Graph Analysis

At the bottom of the graph, there are the following statistics, represented in colors:

  • Black: The total number of current samples sent.
  • Blue: The current average of all samples sent.
  • Red: The current standard deviation.
  • Green: The throughput rate, which represents the number of requests per minute the server handled.

The Throughput is the most important parameter. It represents the ability of the server to handle a heavy load. The higher the Throughput, the better the server performance.

The deviation is shown in red - it indicates the deviation from the average. The smaller the better.
https://www.guru99.com/jmeter-performance-testing.html



Analyze the results
https://octoperf.com/blog/2017/10/19/how-to-analyze-jmeter-results/

my thanks to these awesome sites:
https://www.guru99.com/jmeter-element-reference.html
https://octoperf.com/blog/2018/03/29/jmeter-tutorial

Becoming a dramatically better programmer



my thanks to a great article here:
https://recurse.henrystanley.com/post/better/

Powershell Intro/Basics


Windows PowerShell is Microsoft's task automation framework, consisting of a command-line shell and associated scripting language built on top of .NET Framework.

Perform administrative tasks on both local and remote Windows systems.

PowerShell output is always a .NET object; it could be any .NET object whose assembly is loaded into PowerShell, including your own .NET objects.

Discoverability

Windows PowerShell makes it easy to discover its features. For example, to find a list of cmdlets that view and change Windows services, type:
Get-Command *-Service
After discovering which cmdlet accomplishes a task, you can learn more about the cmdlet by using the Get-Help cmdlet. For example, to display help about the Get-Service cmdlet, type:
Get-Help Get-Service
Most cmdlets return objects which can be manipulated and then rendered into text for display. To fully understand the output of that cmdlet, pipe its output to the Get-Member cmdlet. For example, the following command displays information about the members of the object output by the Get-Service cmdlet.
Get-Service | Get-Member

Online Help

You can get online help for any command using:
Get-Help {command-name} -Online
Get-Help Get-Service -Online

PowerShell syntax

semicolons

They are not needed to terminate statements. You can, however, use them to separate statements on the command line.

escape character

The backtick (grave accent), represented as the character `, is the escape character within PowerShell. You can use this character, for example, to print tabs (`t), to escape characters with a special meaning like the $ character (`$), or to escape quotes (`").
You can also use the backtick character to span your statements over multiple lines, which can sometimes come in handy.

variables

Variables in PowerShell have a ‘$’ sign in front of them, like
$myvar = 'some value'
$myothervar = 42

single & double quotes

There actually is a distinction when using these. The single quote represents everything literally; the double quote interprets what's inside it.
$dummyvalue = 'dummy'
write 'test$dummyvalue'
write "test$dummyvalue"
The first one will print 'test$dummyvalue', the second one will print 'testdummy'. And like most scripting languages, you can use the quotes to encapsulate each other, like "so he said 'wow', just like he did".

brackets and colons/periods

The round brackets (parentheses), '(' and ')', can be used to specify order and to pass arguments to .NET methods. When calling a function you defined in PowerShell, you pass the arguments SPACE separated and without parentheses, since they could cause an error.

The square brackets can be used to access list/array members like in most languages. The code below will produce the output '1', since it accesses the first member of the array.
$test = 1,2,3,4,5
write $test[0]
The other use case for these brackets is to define types and access .NET classes, as in the example below, which will print the current date and time. As you can see here, you can use a double colon to access static properties of a class.
$test = [DateTime]
write $test::now
To access methods you can use the period character:
$test = New-Object System.DateTime
write $test
write $test.AddYears(2015)

Functions

There is an important distinction between calling functions and calling methods. Functions in PowerShell use spaces for the parameters and not brackets, which you might confuse in the beginning when you mix in calls to methods.

CmdLet v Script v Function

Script

A file containing a collection of commands which are executed. A script can contain functions.

Function

A function is also a collection of commands which are executed, but it must be present in the session to be executed.
Function Say-Hello {
  <#
    .Synopsis
      A brief summary of the function
    .Description
      A full description of the function
    .Parameter Name
      The name parameter
  #>

  Param(
    [String]$Name = "everyone"
  )

  Return "Hello $Name"
}

Cmdlet

A cmdlet is .NET code which performs a specific task without you having to write it out yourself.

my thanks to:

some links to good resources below:
https://diablohorn.com/2016/02/06/powershell-overview-and-introduction/

blog series referenced in previous link above:
https://www.darkoperator.com/powershellbasics/

https://docs.microsoft.com/en-us/powershell/scripting/powershell-scripting?view=powershell-6

Factory Method


The Factory completely hides/removes the process of creating objects from the client/caller.

Example

Let's say we have an eCommerce application and we have two payment gateways integrated with our application. Let's call these payment gateways BankOne and BankTwo.

Interface

We need a "Payment Gateway" interface to set the template/functionality that all payment gateways need to provide.

interface IPaymentGateway
{
    void MakePayment(Product product);        
}

Concrete implementations

Concrete implementations of Payment Gateways required:
public class BankOne : IPaymentGateway
{
    public void MakePayment(Product product)
    {
        // The bank specific API call to make the payment
        Console.WriteLine("Using bank1 to pay for {0}, amount {1}", product.Name, product.Price);
    }
}

public class BankTwo : IPaymentGateway
{
    public void MakePayment(Product product)
    {
        // The bank specific API call to make the payment
        Console.WriteLine("Using bank2 to pay for {0}, amount {1}", product.Name, product.Price);
    }
}
To be able to identify which payment mechanism has been selected, let's define a simple enum, PaymentMethod.
enum PaymentMethod
{
    BANK_ONE,
    BANK_TWO
}

Factory

Factory class to handle all the details of creating these objects.
public class PaymentGatewayFactory
{
    public virtual IPaymentGateway CreatePaymentGateway(PaymentMethod method, Product product)
    {
        IPaymentGateway gateway = null;

        switch(method)
        {
            case PaymentMethod.BANK_ONE:
                gateway = new BankOne();
                break;
            case PaymentMethod.BANK_TWO:
                gateway = new BankTwo();
                break;
        }

        return gateway;
    }
}
Our factory class accepts the selected payment gateway and then, based on the selection, creates the concrete payment gateway class. We have effectively abstracted all these details away from the client code. Otherwise, every class that wants to use a payment gateway would have to contain all this logic.

Usage

Let's now look at how the client class can use this factory method to make the payment.
public class PaymentProcessor
{
    IPaymentGateway gateway = null;

    public void MakePayment(PaymentMethod method, Product product)
    {
        PaymentGatewayFactory factory = new PaymentGatewayFactory();
        this.gateway = factory.CreatePaymentGateway(method, product);

        this.gateway.MakePayment(product);
    }
}
Now our client class does not depend on the concrete payment gateway classes. It also does not have to worry about the creation logic of the concrete payment gateway classes. All this is nicely abstracted out in the factory class itself.


my thanks to the following article:
https://www.codeproject.com/Articles/874246/Understanding-and-Implementing-Factory-Pattern-i

Windows Containers and Docker


Windows Containers Fundamentals

  1. Containers wrap software up within a complete file system that contains everything it needs to run: code, runtime, system tools and system libraries.
  2. Always run the same, regardless of the environment.
  3. Applications running in containers can’t interact or see other applications running in the host OS or in other containers.

Virtual Machines Vs Containers

Virtual machine

  1. standalone and has its own operating system, its own applications and its own resources.
  2. Each virtual machine uses its own OS, libraries, etc.
  3. occupies a significant amount of memory.

Containers

  1. do not contain a full operating system of their own
  2. take up fewer resources
  3. share the host operating system, including the kernel and libraries, so they don’t need to boot a full OS.

Windows Server Containers Vs Hyper-V Containers

Windows Server Container

  1. based on the Windows Server Core image.
  2. shares the kernel with the container host; the right choice if we trust the code being run.

Hyper-V Container

  1. based on the Windows Nano Server image.
  2. each container runs in a highly optimized virtual machine, providing full, secure isolation.
  3. the kernel of the container host is not shared with other Hyper-V Containers.
  4. the right choice if we don't trust the code being run.

Docker

Windows Server 2016 can't run Linux containers in Docker format; it can only run Windows containers.

Docker Platform

  • Container Host: Physical or Virtual computer system configured with the Windows Container feature.
  • Container Image: A container image contains the base operating system, application, and all the application dependencies that are needed to quickly deploy a container.
  • Container OS Image: The container OS image is the operating system environment.
  • Container Registry: Container images are stored in a container registry, and can be downloaded on demand. It is a place where container images are published. A registry can be remote or on-premises.
  • Docker Engine: It is the core of the Docker platform. It is a lightweight container runtime that builds and runs your container.
  • Dockerfile: Dockerfiles are used by developers to build and automate the creation of container images. From a Dockerfile, the Docker daemon can automatically build a container image.


my thanks to the following:
https://www.red-gate.com/simple-talk/sysadmin/virtualization/working-windows-containers-docker-basics/

Knockout JS Intro

0

Category : ,

Introduction

Knockout is a fast, extensible and simple JavaScript library designed to work with HTML document elements using a clean underlying view model. It helps to create rich and responsive user interfaces. Any section of the UI that should update dynamically (e.g., changing depending on the user's actions or when an external data source changes) can be handled more simply and maintainably with Knockout.

Working with Knockout consists of several steps (a minimal sketch follows the list below):

  • Get data model:
    In most cases, data will be returned from the remote server in JSON format via an AJAX (Asynchronous JavaScript and XML) call.

  • Create View:
    The view is an HTML template with Knockout bindings, declared using "data-bind" attributes. It can contain grids, divs, links, forms, buttons, images and other HTML elements for displaying and editing data.

  • Create View Model:
    The view model is a pure-code representation of the data and operations on a UI. It can have regular properties and observable properties. An observable property means that when it changes in the view model, the UI is updated automatically.

  • Map data from data model to view model:
    In most cases, the data in the data model is independent of the UI and has no concept of observables. In this step a mapping from the data model to the view model should be created, either manually or using the Knockout mapping plugin.

  • Bind view model to the view:
    Once the view model is initialized, it can be bound to part of the HTML document, or to the whole document.
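
As a rough end-to-end sketch of these steps (the /api/person endpoint is hypothetical, and jQuery plus the Knockout mapping plugin are assumed to be loaded):

// get the data model: JSON returned from a (hypothetical) remote endpoint
$.getJSON('/api/person', function (data) {
    // map the plain JSON data to a view model with observable properties
    var viewModel = ko.mapping.fromJS(data);

    // bind the view model to the whole HTML document
    ko.applyBindings(viewModel);
});

Once bound, any change to an observable property is pushed to the UI automatically.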

Data-Bind

The HTML attribute data-bind is used to bind a view model to the view. It is a custom Knockout attribute, reserved for Knockout bindings. A data-bind attribute value consists of two parts, a name and a value, separated by a colon; multiple bindings are separated by commas.

The binding item name should match a built-in or custom binding handler. The binding item value can be a view model property, any valid JavaScript expression, or any valid JavaScript variable. For example:
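
A minimal view snippet (firstName and isEditable are placeholder view model properties; text, value and enable are built-in binding handlers):

<!-- "text" shows a property; "value" and "enable" bind form inputs -->
<p>First name: <span data-bind="text: firstName"></span></p>
<input data-bind="value: firstName, enable: isEditable" />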

Live Examples

http://knockoutjs.com/examples/
http://www.knockmeout.net/2011/08/all-of-knockoutjscom-live-samples-in.html

Tutorials

http://learn.knockoutjs.com/

Documentation

http://knockoutjs.com/documentation/introduction.html

my thanks to the below tutorials/blogs:
https://www.devbridge.com/articles/knockout-a-real-world-example
http://www.knockmeout.net/2011/08/all-of-knockoutjscom-live-samples-in.html

Different Testing Types

0

Category :

  • Unit test: Specify and test one point of the contract of a single method of a class. This should have a very narrow and well-defined scope. Complex dependencies and interactions with the outside world are stubbed or mocked (see the sketch after this list).

  • Integration test: Test the correct inter-operation of multiple subsystems. There is a whole spectrum here, from testing the integration between two classes to testing the integration with the production environment.

  • Smoke test (aka Sanity check): A simple integration test where we just check that when the system under test is invoked it returns normally and does not blow up. It is an analogy with electronics, where the first test occurs when powering up a circuit: if it smokes, it's bad.

  • Regression test: A test that was written when a bug was fixed. It ensures that this specific bug will not occur again. The full name is "non-regression test". It can also be a test made prior to changing an application to make sure the application provides the same outcome.

  • Acceptance test: Test that a feature or use case is correctly implemented. It is similar to an integration test, but with a focus on the use case being delivered rather than on the components involved.

  • System test: Tests a system as a black box. Dependencies on other systems are often mocked or stubbed during the test (otherwise it would be more of an integration test).

  • Pre-flight check: Tests that are repeated in a production-like environment, to alleviate the 'builds on my machine' syndrome. Often this is realized by doing an acceptance or smoke test in a production-like environment.

  • Black-box testing: testing only the public interface with no knowledge of how the thing works.

  • Glass-box testing: testing all parts of a thing with full knowledge of how it works.
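
As a minimal sketch of a unit test with a stubbed dependency (grossPrice and the rate provider are hypothetical names; Node's built-in assert module is used for the assertion):

const assert = require('assert');

// unit under test: depends on an injected tax rate provider
function grossPrice(netPrice, taxRateProvider) {
    return netPrice * (1 + taxRateProvider.getRate());
}

// stub the outside-world dependency rather than calling a real service
const stubRateProvider = { getRate: function () { return 0.25; } };

// one point of the contract, with a narrow and well-defined scope
assert.strictEqual(grossPrice(100, stubRateProvider), 125);
console.log('unit test passed');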




my thanks to the following answers:
https://stackoverflow.com/questions/520064/what-is-unit-test-integration-test-smoke-test-regression-test?rq=1
https://stackoverflow.com/questions/437897/what-are-unit-testing-and-integration-testing-and-what-other-types-of-testing-s

Asynchronous requests with Postman's PM API

0

Category : , , ,

You can send requests asynchronously with the pm API's sendRequest method; it can be used in the pre-request or the test script.

It's important to note that if you send an async request in the pre-request tab, "The main Postman request will NOT be sent until the pre-request script is determined to be finished with all callbacks, including sendRequest."
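
For example, a minimal pre-request sketch (the token URL and variable names are hypothetical) where the main request will not be sent until the callback has stored the token:

// pre-request script: fetch a token before the main request runs
pm.sendRequest('https://example.com/auth/token', function (err, res) {
    if (!err) {
        // the main request waits for this callback, so the variable
        // is guaranteed to be set before the main request is sent
        pm.environment.set('auth_token', res.json().token);
    }
});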

A blog post with more detailed info on this can be found at the link below:
http://blog.getpostman.com/2017/10/03/send-asynchronous-requests-with-postmans-pm-api/

I have only done basic testing with this, but I could get the method to fire using the 2nd of the 3 examples available at the previous URL:
// build the headers as an array of 'name:value' strings
var headers = ['reseller_id:' + environment.booking_api_reseller_id];
headers.push('request_id:' + environment.booking_api_request_id);
headers.push('request_authentication:' + environment.booking_api_request_authentication);
console.log(headers);

// Example with a full-fledged SDK request
const echoPostRequest = {
  url: environment.booking_api_host + '/v1/Availability/product/' + environment.booking_api_availability_productKey +
       '?fromDateTime=' + environment.booking_api_availability_start_date +
       '&toDateTime=' + environment.booking_api_availability_end_date,
  method: 'GET',
  header: headers,
  body: {
    mode: 'raw',
    raw: JSON.stringify({ key: 'this is json' })
  }
};

pm.sendRequest(echoPostRequest, function (err, res) {
    // the callback fires once the asynchronous request completes
    console.log(err ? err : res.json());
});
    
    
I was having issues passing the headers to the request, but I found the URL below, which states that the header param should be an array.
http://www.postmanlabs.com/postman-collection/Request.html#~definition
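
Based on that definition, the headers can apparently also be supplied as an array of key/value objects rather than 'name:value' strings, e.g. (a sketch, untested):

// alternative header format: an array of { key, value } objects
const headers = [
    { key: 'reseller_id', value: environment.booking_api_reseller_id },
    { key: 'request_id', value: environment.booking_api_request_id },
    { key: 'request_authentication', value: environment.booking_api_request_authentication }
];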

Reusable scripts in Postman

0

Category : , , ,

You can reuse methods across requests in Postman.

This is tip #5 in the list below:
http://blog.getpostman.com/2017/07/28/api-testing-tips-from-a-postman-professional/

1. Initialise the method in the Pre-Request or Tests tab, or in a previous request.
2. Store it in an environment or global variable.
3. Call it multiple times from other requests.

1) Set up the method in the Pre-Request or Tests tab in Postman; you can also list the params to pass to the method.
postman.setEnvironmentVariable("commonTests", (responseBody, environmentSchema) => {

    // parse response and log
    var responseObject = JSON.parse(responseBody);
    //console.log("response: " + JSON.stringify(responseObject));

    // test to check status code
    tests["Status code is 200"] = responseCode.code === 200;

    // test response time
    console.log("responseTime: " + responseTime);
    tests["Response time is less than " + environment.max_server_response_time + "ms"] = responseTime < environment.max_server_response_time;

    // validate schema
    eval(environment.validateSchema)(responseObject, environmentSchema);
});

2) Call the method from the Pre-Request or Tests tab in Postman.
You can also call a method from within another method, as you can see at the end of the previous code sample.

// call the stored common tests, passing the response object and the relevant schema
eval(environment.commonTests)(responseObject, environment.specificSchema);