CSS Selectors You Must Memorize


1. X Y

li a {
  text-decoration: none;
}
Want to target only the anchors which are within list items? This is specifically when you'd use a descendant selector.

2. X:visited and X:link

a:link { color: red; }
a:visited { color: purple; }
We use the :link pseudo-class to target all anchor tags which have yet to be clicked on, and the :visited pseudo-class to target only those which have already been visited.

3. X + Y

ul + p {
   color: red;
}
This is referred to as an adjacent selector. It will select only the element that is immediately preceded by the former element. In this case, only the first paragraph after each ul will have red text.

4. X > Y    (direct children)

div#container > ul {
  border: 1px solid black;
}

A selector of #container > ul will only target the uls which are direct children of the div with an id of container. It will not target, for instance, the ul that is a child of the first li.
For this reason, there are performance benefits in using the child combinator. In fact, it’s recommended particularly when working with JavaScript-based CSS selector engines.

5. X[title]

a[title] {
   color: green;
}
Referred to as an attribute selector, in our example above, this will only select the anchor tags that have a title attribute.

6. X[href="foo"]

a[href="http://net.tutsplus.com"] {
  color: #1f6053; /* nettuts green */
}
The snippet above will style all anchor tags which link to http://net.tutsplus.com; they’ll receive a branded green color. All other anchor tags will remain unaffected.

7. X[href*="nettuts"]

a[href*="tuts"] {
  color: #1f6053; /* nettuts green */
}
There we go; that's what we need. The star designates that the specified value must appear somewhere in the attribute's value. That way, this covers nettuts.com, net.tutsplus.com, and even tutsplus.com.

8. X[href^="http"]

a[href^="http"] {
   background: url(path/to/external/icon.png) no-repeat;
   padding-left: 10px;
}
If we want to target all anchor tags that have a href which begins with http, we could use a selector similar to the snippet shown above. This is a cinch with the caret symbol, which is most commonly used in regular expressions to designate the beginning of a string.

9. X[href$=".jpg"]

a[href$=".jpg"] {
   color: red;
}
Again, we use a regular expression symbol, $, to refer to the end of a string. In this case, we're searching for all anchors which link to an image — or at least a url that ends with .jpg. Keep in mind that this certainly won't work for gifs and pngs.

10. X:checked

input[type=radio]:checked {
   border: 1px solid black;
}
This pseudo class will only target a user interface element that has been checked - like a radio button, or checkbox. It's as simple as that.

11. X:after

The before and after pseudo elements kick butt. Every day, it seems, people are finding new and creative ways to use them effectively. They simply generate content around the selected element.
Many were first introduced to these classes when they encountered the clear-fix hack.

.clearfix:after {
    content: "";
    display: block;
    clear: both;
    visibility: hidden;
    font-size: 0;
    height: 0;
 }

.clearfix {
   *display: inline-block;
   _height: 1%;
}

This hack uses the :after pseudo element to append a space after the element, and then clear it. It's an excellent trick to have in your tool bag, particularly in the cases when the overflow: hidden; method isn't possible.

12. X:hover

div:hover {
  background: #e3e3e3;
}
Oh come on. You know this one. The official term for this is user action pseudo class. It sounds confusing, but it really isn't. Want to apply specific styling when a user hovers over an element? This will get the job done!

13. X:not(selector)

div:not(#container) {
   color: blue;
}
The negation pseudo class is particularly helpful. Let's say I want to select all divs, except for the one which has an id of container. The snippet above will handle that task perfectly.

14. X::pseudoElement

p::first-line {
   font-weight: bold;
   font-size: 1.2em;
}
We can use pseudo elements (designated by ::) to style fragments of an element, such as the first line, or the first letter. Keep in mind that these must be applied to block level elements in order to take effect.

15. X:nth-child(n)

li:nth-child(3) {
   color: red;
}
Remember the days when we had no way to target specific elements in a stack? The nth-child pseudo class solves that!
Please note that nth-child accepts an integer as a parameter; however, it is not zero-based. If you wish to target the second list item, use li:nth-child(2).
We can even use this to select a variable set of children. For example, we could use li:nth-child(4n) to select every fourth list item.

16. X:first-child

ul li:first-child {
   border-top: none;
}
This structural pseudo class allows us to target only the first child of the element's parent. You'll often use this to remove borders from the first and last list items.
For example, let's say you have a list of rows, and each one has a border-top and a border-bottom. Well, with that arrangement, the first and last item in that set will look a bit odd.
Many designers apply classes of first and last to compensate for this. Instead, you can use these pseudo classes.

17. X:first-of-type

The first-of-type pseudo class allows you to select the first sibling of its type.

A Test

To better understand this, let's have a test. Copy the following mark-up into your code editor:

<div>
   <p> My paragraph here. </p>
   <ul>
      <li> List Item 1 </li>
      <li> List Item 2 </li>
   </ul>

   <ul>
      <li> List Item 3 </li>
      <li> List Item 4 </li>
   </ul>
</div>

Now, without reading further, try to figure out how to target only "List Item 2". When you've figured it out (or given up), read on.

Solution 1

There are a variety of ways to solve this test. We'll review a handful of them. Let's begin by using first-of-type.

ul:first-of-type > li:nth-child(2) {
   font-weight: bold;
}
This snippet essentially says, "find the first unordered list on the page, then find only the immediate children, which are list items. Next, filter that down to only the second list item in that set."

Solution 2

Another option is to use the adjacent selector.

p + ul li:last-child {
   font-weight: bold;
}
In this scenario, we find the ul that immediately follows the p tag, and then find the very last child of that element.

Solution 3

We can be as obnoxious or as playful as we want with these selectors.

ul:first-of-type li:nth-last-child(1) {
   font-weight: bold;
}
This time, we grab the first ul on the page, and then find the very first list item, but starting from the bottom!
:)

 my thanks to:
http://net.tutsplus.com/tutorials/html-css-techniques/the-30-css-selectors-you-must-memorize/

Common JavaScript Design Pattern - jQuery.doc.ready


Let me show you an overview, and then look at how it comes together:

function MyScript(){}
(function()
{
  var THIS = this;
  function defined(x)
  {
    return typeof x != 'undefined';
  }
  this.ready = false;
  this.init = function()
  {
    this.ready = true;
  };
  this.doSomething = function()
  {
  };   
  var options = {
      x : 123,
      y : 'abc'
      };
  this.define = function(key, value)
  {
    if(defined(options[key]))
    {
      options[key] = value;
    }
  };
}).apply(MyScript);

As you can see from that sample code, the overall structure is a function literal:
(function()
{
  ...
})();

A function literal is essentially a self-executing scope, equivalent to defining a named function and then calling it immediately:

function doSomething()
{
  ...
}

doSomething();

I originally started using function literals for the sake of encapsulation—any script in any format can be wrapped in that enclosure, and it effectively “seals” it into a private scope, preventing it from conflicting with other scripts in the same scope, or with data in the global scope. The bracket-pair at the very end is what executes the scope, calling it just like any other function.

But if, instead of just calling it globally, the scope is executed using Function.apply, it can be made to execute in a specific, named scope which can then be referenced externally.

So by combining those two together—the creation of a named function, then the execution of a function literal into the scope of the named function—we end up with a single-use object that can form the basis of any script, while simulating the kind of inheritance that’s found in an object-oriented class.

The Beauty Within

By wrapping it up in this way we have a construct that can be associated with any named scope. We can create multiple such constructs, and associate them all with the same scope, and then all of them will share their public data with each other.
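For example, here's a quick sketch of two constructs applied to the same named scope (the property names are purely illustrative):

function MyScript(){}
(function()
{
  this.version = '1.0';
}).apply(MyScript);

(function()
{
  this.report = function()
  {
    alert(MyScript.version);    //"1.0" - public data defined by the first construct
  };
}).apply(MyScript);

MyScript.report();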

But at the same time as sharing public data, each can define its own private data too. Here for example, at the very top of the script:

var THIS = this; 

We've created a private variable called THIS which points to the function scope, and can be used within private functions to refer to it.

Private functions can be used to provide internal utilities:
function defined(x)
{
  return typeof x != 'undefined';
}

Then we can create public methods and properties, accessible to other instances, and to the outside:
this.ready = false;
this.init = function()
{
  this.ready = true;
};
this.doSomething = function()
{
};

We can also create privileged values—which are private, but publicly definable, in this case via the public define method; its arguments could be further validated according to the needs of the data:

var options = {
  x : 123,
  y : 'abc'
  };
this.define = function(key, value)
{
  if(defined(options[key]))
  {
    options[key] = value;
  }
};

THIS or That?

The enclosing scope of any function can be referred to as this, so when we define a named or anonymous enclosure, this refers to that enclosure at the top level; and it continues to refer to that enclosure from within its public methods.

But within private functions, which are called without an owner object, this no longer refers to the top-level enclosing scope; it falls back to the global object (window). So if we want to be able to refer to the top-level scope, we have to create a variable which refers to it from anywhere. That's the purpose of "THIS":

function MyScript(){}
(function()
{
   var THIS = this;  
   function defined(x)
   {
      alert(this);      //points to window (the global object)
      alert(THIS);      //points to MyScript()
   }
}).apply(MyScript);

Wrapped Up!

All of these features are what makes the construct so useful to me. And it's all wrapped up in a neat, self-executing singleton — a single-use object that's easy to refer to and integrate, and straightforward to use!


my thanks to:
http://blogs.sitepoint.com/2010/11/30/my-favorite-javascript-design-pattern/
http://blogs.sitepoint.com/2010/12/08/the-anatomy-of-a-javascript-design-pattern/

CSS Specificity


If you have two (or more) conflicting CSS rules that point to the same element, there are some basic rules that a browser follows to determine which one is most specific and therefore wins out.
  1. If the selectors are the same then the latest one will always take precedence.
  2. The more specific a selector, the more preference it will be given when it comes to conflicting styles.
  3. The embedded style sheet has a greater specificity than other rules.

Specificity hierarchy

Every selector has its place in the specificity hierarchy. There are four distinct categories which define the specificity level of a given selector:
  1. Inline styles (Presence of style in document).
    An inline style lives within your XHTML document. It is attached directly to the element to be styled. E.g. <h1 style="color: #fff;">
  2. IDs (# of ID selectors)
    ID is an identifier for your page elements, such as #div.
  3. Classes, attributes and pseudo-classes (# of class selectors).
    This group includes .classes, [attributes] and pseudo-classes such as :hover, :focus etc.
  4. Elements and pseudo-elements (# of Element (type) selectors).
    Including for instance :before and :after.
The actual specificity of a group of nested selectors takes some calculating. Basically, you give every id selector ("#whatever") a value of 100, every class selector (".whatever") a value of 10 and every HTML selector ("whatever") a value of 1. Then you add them all up and hey presto, you have the specificity value.
  • p has a specificity of 1 (1 HTML selector)
  • div p has a specificity of 2 (2 HTML selectors; 1+1)
  • .tree has a specificity of 10 (1 class selector)
  • div p.tree has a specificity of 12 (2 HTML selectors and a class selector; 1+1+10)
  • #baobab has a specificity of 100 (1 id selector)
  • body #content .alternative p has a specificity of 112 (HTML selector, id selector, class selector, HTML selector; 1+100+10+1)
So if all of these examples were used, div p.tree (with a specificity of 12) would win out over div p (with a specificity of 2) and body #content .alternative p would win out over all of them, regardless of the order.

What is what


  • A selector is the element that is linked to a particular style. E.g. p in
    p { padding: 10px; }
    



  • A class selector is a selector that uses a defined class (multiple per page). E.g. p.section in
    p.section { padding: 10px; } 




  • An ID selector is a selector that uses an individually assigned identifier (one per page). E.g. #section in
    #section { padding: 10px; }
    
    (X)HTML: <p id="section">Text</p>




  • A contextual selector is a selector that defines a precise cascading order for the rule. E.g. p span in
    p span { font-style: italic; }
    

    defines that all span-elements within a p-element should be styled in italics.





  • An attribute selector matches elements which have a specific attribute or its value. E.g. p[title] in

    p[title] { font-weight: bold; } 
    

    matches all p-elements which have a title attribute.





  • Pseudo-classes are special classes that are used to define the behavior of HTML elements. They are used to add special effects to some selectors, which are applied automatically in certain states. E.g. :visited in

    a:visited {
    text-decoration: underline; 
    }
    





  • Pseudo-elements provide designers a way to assign style to content that does not exist in the source document. Pseudo-element is a specific, unique part of an element that can be used to generate content “on the fly”, automatic numbering and lists. E.g. :first-line or :after in

    p:first-line {
    font-variant: small-caps; 
    }
    a:link:after { content: " (" attr(href) ")"; }
    




  • My thanks to
    http://htmldog.com/guides/cssadvanced/specificity/
    http://www.smashingmagazine.com/2007/07/27/css-specificity-things-you-should-know/


    Future reading:
    Inheritance
    http://www.smashingmagazine.com/2010/04/07/css-specificity-and-inheritance/

    Naming Conventions


    Naming Conventions stuff

    Understanding the Stack Trace


    Basically, the Stack Trace is a trace of the function calls currently on
    the Stack. When a program runs and a function is called, a new frame
    (holding the function's return address, parameters and local variables)
    is pushed onto the Stack. When the function exits, its frame is popped
    off. If it calls other functions, their frames are stacked on top of it,
    and each one is popped off the Stack when it exits.

    The Stack Trace shows the "topmost" (latest) functions called. It helps
    identify the chain of execution that led up to the current situation
    (usually an exception). It lists each function on the Stack in reverse
    order of invocation, with the last one executed at the top.

    Sometimes what you do in your code does not throw an exception until it hits
    the .NET Framework components. In these cases, you often have to look down
    the stack until you hit your own functions to determine what actually caused
    the error.
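
    For example, a minimal C# sketch (the method names are made up) of how a trace reads:

    using System;

    class StackTraceDemo
    {
        static void Main()
        {
            try
            {
                Outer();
            }
            catch (Exception ex)
            {
                // The trace lists Inner() first (the last function executed),
                // then Outer(), then Main() - read downwards until you reach
                // your own code to find what actually caused the error.
                Console.WriteLine(ex.StackTrace);
            }
        }

        static void Outer() { Inner(); }

        static void Inner() { throw new InvalidOperationException("boom"); }
    }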

    JavaScript, 5 ways to call a function


    JavaScript has functional programming characteristics, and that can get in our way until we decide to face and learn it.
    Let's first create a simple function that we will be using through the rest of this post. This function will just return an array with the current value of this and the two supplied arguments.
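
    function makeArray(arg1, arg2) {
     return [ this, arg1, arg2 ];
    }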

    
    

    1) Most common way, unfortunately, global function calls

    When we are learning JavaScript we learn how to define functions using the syntax used in the example above. We learn that it's also very easy to call that function — all we need to do is:

    makeArray('one', 'two');  
       // => [ window, 'one', 'two' ]  
    

    That makeArray function isn't just a loose "global" function, it's a method of the global object. Bringing ourselves back to the browser, the global object is mapped to the window object in this environment.

    I say it's unfortunate that this is the most common way because it leads us to declare our functions globally by default. And we all know that global members are not exactly the best practice in software programming. This is especially true in JavaScript. Avoid globals in JavaScript, you won't regret it.

    JavaScript function invocation rule #1
    Calling a function directly, without an explicit owner object, like myFunction(), causes the value of this to be the default object (window in the browser).

    2) Method call

    Let's now create a small object and use the makeArray function as one of its methods. We will declare the object using the literal notation. Let's also call this method.

    //creating the object
    var arrayMaker = {
     someProperty: 'some value here',
     make: makeArray
    };
    
    //invoke the make() method
    arrayMaker.make('one', 'two');
    // => [ arrayMaker, 'one', 'two' ]
    // alternative syntax, using square brackets
    arrayMaker['make']('one', 'two');
    // => [ arrayMaker, 'one', 'two' ]
    

    The value of this became the object itself. You may be wondering why isn't it still window since that's how the original function had been defined. Well, that's just the way functions are passed around in JavaScript. Function is a standard data type in JavaScript, an object indeed; you can pass them around and copy them. It's as if the entire function with argument list and body was copied and assigned to make in arrayMaker. It's just like defining arrayMaker like this:


    var arrayMaker = {
     someProperty: 'some value here',
     make: function (arg1, arg2) {
      return [ this, arg1, arg2 ];
     }
    };
    

    JavaScript function invocation rule #2
    Calling a function using the method invocation syntax, like obj.myFunction() or obj['myFunction'](), causes the value of this to be obj.

    This is a major source of bugs in event handling code. Look at these examples.
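
    The examples look something like this (the ids and wiring are illustrative):

    <input type="button" id="btn1" value="Button 1" />
    <input type="button" id="btn2" value="Button 2" />
    <input type="button" id="btn3" value="Button 3" onclick="buttonClicked();" />

    <script type="text/javascript">
     function buttonClicked() {
      //shows the id of the owner object, or "window" for a direct call
      alert(this === window ? 'window' : this.id);
     }
     //method invocation: 'this' will be the button element
     document.getElementById('btn1').onclick = buttonClicked;
     //direct call inside an anonymous handler: 'this' will be window
     document.getElementById('btn2').onclick = function() { buttonClicked(); };
    </script>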

    
    
    
    
    
    


    Clicking the first button will display "btn1" because it's a method invocation and this will be assigned the owner object (the button input element). Clicking the second button will display "window" because buttonClicked is being called directly (i.e. not like obj.buttonClicked()). This is the same thing that happens when we assign the event handler directly in the element's tag, as we have done for the third button. Clicking the third button does the same as the second button.

    That's another advantage of using a library like jQuery. When defining event handlers in jQuery, the library will take care of overriding the value of this and make sure it contains a reference to the element that was the source of the event.

    //using jQuery
    $('#btn1').click( function() {
     alert( this.id ); // jQuery ensures 'this' will be the button
    });
    

    3) + 4) Two more: apply() and call()

    The more you leverage functions in JavaScript, the more you find yourself passing functions around and needing to invoke them in different contexts. Just like jQuery does in the event handler functions, you'll often need to override the value of this. Remember I told you functions are objects in JavaScript? Functions have predefined methods, two of them are apply() and call(). We can use them to do precisely that kind of overriding.

    var gasGuzzler = { year: 2008, model: 'Dodge Bailout' };
    makeArray.apply( gasGuzzler, [ 'one', 'two' ] );
    // => [ gasGuzzler, 'one' , 'two' ]
    makeArray.call( gasGuzzler,  'one', 'two' );
    // => [ gasGuzzler, 'one' , 'two' ]
    

    The two methods are similar. The first parameter will override this. They differ on the subsequent arguments. Function.apply() takes an array of values that will be passed as arguments to the function and Function.call() takes the same arguments separately. In practice I believe you'll find that apply() is more convenient in most cases.

    JavaScript function invocation rule #3
    If we want to override the value of this without copying the function to another object, we can use myFunction.apply( obj ) or myFunction.call( obj ).

    5) Constructors

    We should be aware that there aren't classes in JavaScript and that any custom type needs a constructor function. It's also a good idea to define the methods of your type using the prototype object, which is a property of the constructor function. Let's create a small type ArrayMaker.

    //declaring the constructor
    function ArrayMaker(arg1, arg2) {
     this.someProperty = 'whatever';
     this.theArray = [ this, arg1, arg2 ];
    }
    // declaring instance methods
    ArrayMaker.prototype = {
     someMethod: function () {
      alert( 'someMethod called');
     },
     getArray: function () {
      return this.theArray;
     }
    };
    
    var am = new ArrayMaker( 'one', 'two' );
    var other = new ArrayMaker( 'first', 'second' );
    
    am.getArray();
    // => [ am, 'one' , 'two' ]
    

    Without the new operator your function will just be called like a global function and those properties that we are creating would be created on the global object (window.) Another issue is that, because you typically don't have an explicit return value in your constructor function, you'll end up assigning undefined to some variable if you forget to use new. For these reasons it's a good convention to name your constructor functions starting with an upper case character. This should serve as a reminder to put the new operator before the call.

    With that taken care of, the code inside the constructor is very similar to any constructor you probably have written in other languages. The value of this will be the new object that you are trying to initialize.

    JavaScript function invocation rule #4
    When used as a constructor, like new MyFunction(), the value of this will be a brand new object provided by the JavaScript runtime. If we don't explicitly return anything from that function, this will be considered its return value.

     

    It's a wrap

    I hope understanding the differences between the invocation styles will help you keep bugs out of your JavaScript code. Some of these bugs can be very tricky to identify and making sure you always know what the value of this will be is a good start to avoiding them in the first place.


    My thanks to: http://devlicio.us/blogs/sergio_pereira/archive/2009/02/09/javascript-5-ways-to-call-a-function.aspx

    JavaScript series: http://devlicio.us/blogs/sergio_pereira/archive/tags/JavaScript-Demystified/default.aspx

    C# Field vs Property


    A property is a function call. A field is not a call - just direct access into the memory of the class.

    Properties are more maintainable than fields. Some properties do not have an equivalent backing field - you need to write some code to get or set them.

    Simple example: say you have a car object. It has mileage and gallons fields. You can make an MPG property by dividing these fields. Notice that there is no MPG field inside the object - the value is computed on the fly by the property. And that property is read-only - you cannot set it. It can only change when the mileage field or the gallons field changes.
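
    A minimal sketch of that car object (the member names are illustrative):

    public class Car
    {
        // plain fields: direct access to the object's memory, no logic involved
        public double Mileage;
        public double Gallons;

        // read-only property: computed on the fly, there is no MPG field
        public double MPG
        {
            get { return Gallons == 0 ? 0 : Mileage / Gallons; }
        }
    }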

    On the other hand, critical code paths (large loops, for example) should avoid reading a lot of properties, or should read the properties once before the loop.

    Take a look here:
    http://msdn.microsoft.com/en-us/library/w86s7x04(VS.80).aspx

    Inversion of Control and Dependency Injection with Castle Windsor Container


    Introduction

    Inversion of Control (IoC) and Dependency Injection (DI) are two related practices in software development which are known to lead to higher testability and maintainability of software products.

    At a glance, these patterns are said to be based on the Hollywood Principle, which states: "don't call us, we'll call you". With a canonical approach, you hard code the classes of the objects you want to instantiate in the source of your application, supply parameters to their constructors and manage their interactions. Each object knows at compile time which are the real classes of the objects they need to interact with, and they will call them directly. So, under this point of view, you and your objects are the ones calling Hollywood. To invert this approach, you need some support from a framework which makes your application smart enough to guess which objects to instantiate, how to instantiate them and, in general, how to control their behavior. Instead of working with concrete classes you'll work with abstractions like interfaces or abstract classes, letting your application decide which concrete classes to use and how to satisfy their dependencies on other components.

    Creating a simple web page scraper

    Following the requirements of the sample application, let's write a class capable of satisfying them. It's called HtmlTitleRetriever, and exposes a single method called GetTitle, which accepts the Uri of a file and returns a string with the title of the HTML document - if it has one - or an empty string.
    public class HtmlTitleRetriever
    {
        public string GetTitle(Uri file)
        {
            string fileContents;
            string title = string.Empty;
    
            WebClient client = new WebClient();
            fileContents = client.DownloadString(file);
    
            int openingTagIndex = fileContents.IndexOf("<title>");
            int closingTagIndex = fileContents.IndexOf("</title>");
    
            if(openingTagIndex != -1 && closingTagIndex != -1)
                title = fileContents.Substring(openingTagIndex, 
                    closingTagIndex - openingTagIndex).Substring(7);
    
            return title;
        }
    }
    What this class does is very simple. First it instantiates a WebClient object - a facade to ease use of HttpWebRequest and HttpWebResponse classes. Then it uses the object to retrieve the contents of the remote resource, using the HTTP protocol. Using string routines, it looks for the opening and closing title tags and extracts the text between them.

    At this point you might be wondering what's so wrong in this class to imply the need for a different approach in implementing its requirements. Actually, not much, as long as the requirements remain so simple. But from a more general point of view there are at least two aspects which need to be revisited about this implementation:

    • The class does more than it should do. A principle of good system design is SoC - separation of concerns. According to this principle, a software component should be able to do a simple task only, and do it well. Instead, the class first downloads the file from the web, and then applies some sort of parsing to retrieve the contents it cares about. These are two different tasks, which should be separated into two different components.
    • What if the class needed to be able to retrieve documents not accessible via the HTTP protocol? You'd need to change the implementation of the class to replace or add this feature. The same consideration applies to the parsing process. In this example it doesn't make much sense but you may discover that under certain circumstances adopting a different scraping mechanism would lead to better performance. In other words, the class has deep knowledge - read, dependencies - on concrete implementations of other components. It's better to avoid this because it leads to bad application design.

    Components and Services

    "A component is a small unit of reusable code. It should implement and expose just one service, and do it well. In practical terms, a component is a class that implements a service (interface). The interface is the contract of the service, which creates an abstraction layer so you can replace the service implementation without effort."

    Applying SoC with Components and Services

    So far you've seen that the responsibilities of the HtmlTitleRetriever class can -and should- be separated into two classes: One for retrieving files and one for parsing their contents.
    Note that these are generic jobs, in that they can be implemented in several ways. The implementation above is just one of the available choices, but you can think of retrieving files from other mediums, as well as adopting a different mechanism to extract the contents of the title tag. In other words, these tasks are supplied by a service, which can be carried out in several ways. The concrete classes which perform the task are the components. The file downloading and title scraping services' contracts can be defined via interfaces, IFileDownloader and ITitleScraper.
    public interface IFileDownloader
    {
        string Download(Uri file);
    }
    
    public interface ITitleScraper
    {
        string Scrape(string fileContents);
    }
    
    Now let's implement these services with concrete classes - the components - supplying the same features as the original HtmlTitleRetriever class.

    
    public class HttpFileDownloader : IFileDownloader
    {
        public string Download(Uri file)
        {
            return new WebClient().DownloadString(file);
        }
    }
    
    public class StringParsingTitleScraper : ITitleScraper
    {
        public string Scrape(string fileContents)
        {
            string title = string.Empty;
            int openingTagIndex = fileContents.IndexOf("<title>");
            int closingTagIndex = fileContents.IndexOf("</title>");
    
            if(openingTagIndex != -1 && closingTagIndex != -1)
                title = fileContents.Substring(openingTagIndex, 
                    closingTagIndex - openingTagIndex).Substring(7);
    
            return title;
        }
    }
    
    These components completely satisfy the requirements of the application. Now they need to be assembled to provide the downloading and parsing services together. So let's modify the original awful class to benefit from their features. This time the class mustn't be aware of the concrete implementation of the services. It just needs to know that someone will provide those services and it will simply use them. The new HtmlTitleRetriever class now looks like this:

    
    public class HtmlTitleRetriever
    {
        private readonly IFileDownloader downloader;
        private readonly ITitleScraper scraper;
    
        public HtmlTitleRetriever(IFileDownloader downloader, ITitleScraper scraper)
        {
            this.downloader = downloader;
            this.scraper = scraper;
        }
    
        public string GetTitle(Uri file)
        {
            string fileContents = downloader.Download(file);
            return scraper.Scrape(fileContents);
        }
    }
    

    Approaching IoC and DI

    Managing object creation and disposal using IoC and DI actually requires less magic than I pretended to make you believe. So, who is going to deal with the objects if you are no longer in charge of them?

    The main point of a framework which offers IoC and DI is a software component called a container. As its name implies, the container acquires knowledge of the components your application needs in order to run, and tries to be smart enough to understand which component you want. This happens when you query it, asking it to return an instance of one of the components it contains. This is what IoC means in practice; you'll no longer instantiate classes using constructors, but instead register them into the container and then ask it to give you an instance of a component.

    The other fundamental feature of the container is that it will be able to resolve - and inject - dependencies between your objects; hence the name Dependency Injection. In the sample application, the container will be smart enough to guess that in order to instantiate an HtmlTitleRetriever object, it needs to instantiate components supplying the IFileDownloader and ITitleScraper services.

    Castle Windsor Container

    Even though the MicroKernel provides enough features for this simple example, Windsor Container is usually more suitable for applications which require a more flexible approach to container configuration and a more user friendly API. The snippet below shows how to configure the sample application with Windsor Container.
    
    IWindsorContainer container = new WindsorContainer();
    
    container.AddComponent("HttpFileDownloader", typeof(IFileDownloader),
         typeof(HttpFileDownloader));
    container.AddComponent("StringParsingTitleScraper", typeof(ITitleScraper),
         typeof(StringParsingTitleScraper));
    container.AddComponent("HtmlTitleRetriever", typeof(HtmlTitleRetriever));
    
    HtmlTitleRetriever retriever = container.Resolve<HtmlTitleRetriever>();
    
    string title = retriever.GetTitle(new Uri("some uri..."));
    
    container.Release(retriever);
    

    As you can see, the API is very similar. That's because Windsor is not another container, but it's built on top of the MicroKernel and simply augments its features. One small but useful feature to note is that Windsor lets you retrieve components using generics, thus avoiding casts.
    So far, you've seen how to configure the container programmatically. In a real life application, you would need to write a lot of code for container configuration. Changing something would require a new build of the entire solution. Windsor offers a new feature which lets you configure the container using XML configuration files, much like you would do with a standard .NET application. So let's rewrite the code above to benefit from external configuration.

    First you need to create a configuration file, called App.config or Web.Config, depending on the kind of application you're building. Note that Windsor has the ability to read configuration from other locations as well. The default application configuration file is just one of the options.
    <?xml version="1.0" encoding="utf-8" ?>
    <configuration>
    
      <configSections>
        <section name="castle"
            type="Castle.Windsor.Configuration.AppDomain.CastleSectionHandler, 
              Castle.Windsor" />
      </configSections>
    
      <castle>
        <components>
          <component id="HtmlTitleRetriever"
                     type="WindsorSample.HtmlTitleRetriever, WindsorSample">
          </component>
          <component id="StringParsingTitleScraper"
                     service="WindsorSample.ITitleScraper, WindsorSample"
                     type="WindsorSample.StringParsingTitleScraper,
                          WindsorSample">
          </component>
          <component id="HttpFileDownloader"
                     service="WindsorSample.IFileDownloader, WindsorSample"
                     type="WindsorSample.HttpFileDownloader, WindsorSample">
          </component>
        </components>
      </castle>
      
    </configuration>
    

    First, the section handler for Windsor configuration has to be registered in the configSections section. Then, the actual configuration takes place in the castle section and is very similar to what I did before via code. The syntax is the following:
    • id (required): a string name to identify the component
    • service: the contract implemented by the component
    • type: the concrete type of the component
    Service and type attributes require the fully qualified name of the type (namespace.typename) and the assembly name after the comma.

    You may have noticed that the HtmlTitleRetriever class is registered without supplying a service. In fact, it doesn't implement any interface or base class, since it's unlikely that you will ever provide a different implementation of it. The other two components, instead, are concrete implementations of a service that can be carried out in several ways. This syntax lets you register more than one component for the same service. By default, when the container finds that two components have been registered for the same service, it resolves dependencies by supplying the first component registered (either in the configuration file or via code), but this behavior can be changed using the string identification key of the component. Here's how the code of the application changes to take advantage of external configuration:

    
    IWindsorContainer container = new WindsorContainer(new XmlInterpreter());
    HtmlTitleRetriever retriever = container.Resolve<HtmlTitleRetriever>();
    string title = retriever.GetTitle(new Uri("some address..."));
    container.Release(retriever); 

    Taking advantage of IoC and DI

    So far you've seen how to switch from a canonical programming process to inversion of control. Now let's make a small step towards understanding why IoC makes applications better.
    Suppose that the original requirements changed and you needed to retrieve files no longer using the HTTP protocol but instead via FTP. With the former approach you'd need to change the code in the HtmlTitleRetriever class. That's not a lot of work since the example is very simple, but in an enterprise application this may imply a lot of work. Instead, let's see what it takes to provide this feature using Windsor.
    First, you'll need to create a class implementing the IFileDownloader interface which retrieves files via FTP. Then, register it into the configuration file, replacing the former HTTP implementation. So, no need to change a single line of code of the application, and no need for a recompilation since you can provide this new class into a new assembly. Actually, the features provided by Windsor are much smarter than this, but this is topic for another article.
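
    A sketch of such a component (WebClient also understands ftp:// addresses, so the class can stay tiny; set Credentials on it if the server requires a login):

    public class FtpFileDownloader : IFileDownloader
    {
        public string Download(Uri file)
        {
            // WebClient speaks ftp:// as well as http://
            return new WebClient().DownloadString(file);
        }
    }

    In the configuration file, the component registered for the IFileDownloader service would then point to WindsorSample.FtpFileDownloader instead of WindsorSample.HttpFileDownloader.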

    Summary

    In this article you've seen what Inversion of Control and Dependency Injection are and how they can lead to a better design of a software application. You've seen how to take advantage of them using the open source Windsor Container which comes along with Castle Project.


    My thanks to: http://dotnetslackers.com/articles/designpatterns/InversionOfControlAndDependencyInjectionWithCastleWindsorContainerPart1.aspx

    Log4Net Tutorial in C#


    Logging Levels

    There are seven logging levels, five of which can be called in your code. They are as follows (with the highest being at the top of the list):
    1. OFF - nothing gets logged (cannot be called)
    2. FATAL
    3. ERROR
    4. WARN
    5. INFO
    6. DEBUG
    7. ALL - everything gets logged (cannot be called)

    These levels will be used multiple times, both in your code as well as in the config file. There are no set rules on what these levels represent (except the first and last).

    Add the following code to the AssemblyInfo.cs file.

    // Configure log4net using the .config file
     [assembly: log4net.Config.XmlConfigurator(Watch = true)]
     // This will cause log4net to look for a configuration file
     // called TestApp.exe.config in the application base
     // directory (i.e. the directory containing TestApp.exe)
     // The config file will be watched for changes.
    

    Add the following section to the web/app.config file, inside the <configSections> node:
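
    <section name="log4net"
             type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
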
    Create a new section in the web/app.config using log4net as the node name:
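
    For example, a minimal setup that logs everything from DEBUG upwards to a text file (the appender, file name and pattern are just one common choice):

    <log4net>
      <appender name="FileAppender" type="log4net.Appender.FileAppender">
        <file value="log-file.txt" />
        <appendToFile value="true" />
        <layout type="log4net.Layout.PatternLayout">
          <conversionPattern value="%date [%thread] %-5level %logger - %message%newline" />
        </layout>
      </appender>
      <root>
        <level value="DEBUG" />
        <appender-ref ref="FileAppender" />
      </root>
    </log4net>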

    Define a static logger variable at the top of your class. Something like this will work:

    private static readonly ILog log = LogManager.GetLogger(typeof(Program));
    

    Altogether then, your class might look something like this:

    using System;
    using System.Collections.Generic;
    using System.Text;
    using log4net;
    
    namespace log4netDemo
    {
      class Program
      {
       // Define a static logger variable so that it references the name of your class
       private static readonly ILog log = LogManager.GetLogger(typeof(Program));
    
       static void Main(string[] args)
       {
        log.Info("Entering application.");
    
        for (int i = 0; i < 10; i++)
        {
         log.DebugFormat("Inside of the loop (i = {0})", i);
        }
    
        log.Info("Exiting application.");
       }
      }
    }
    

    my thanks to:
    http://www.justinrhinesmith.com/blog/2008/05/12/quick-and-easy-log4net-setup/

    GET v POST..and PUT


    Use GET if you don't mind the request being repeated (that is, it doesn't change state).

    A RESTful application will use GETs for operations which are both safe and idempotent. A safe operation is an operation which does not change the data requested. An idempotent operation is one in which the result will be the same no matter how many times you request it. It stands to reason that, as GETs are used for safe operations, they are automatically also idempotent. Typically a GET is used for retrieving a resource or collection of resources.

    Advantages of Get:
    • Urls can be bookmarked safely.
    • Pages can be reloaded safely.
    Disadvantages of Get:
    • Variables are passed through the url as name-value pairs. (Security risk)
    • Limited number of variables that can be passed. (Based upon browser; IE is limited to 2,048 characters.)

     

    Use POST for destructive actions such as creation, editing, and deletion, because you can't hit a POST action in the address bar of your browser.

    A POST would be used for any operation which is neither safe nor idempotent. Typically a POST would be used to create a new resource, for example creating a NEW question (though in some designs a PUT would be used for this also). If you run the POST twice you would end up creating TWO new questions.
    POST can transmit a larger amount of information and is also more secure than GET, because you aren't sticking information into a URL. And so using GET as the method for an HTML form that collects a password or other sensitive information is not the best idea.

    Advantages of Post:
    • Name-value pairs are not displayed in the url. (Security += 1)
    • An effectively unlimited number of name-value pairs can be passed via post.
    Disadvantages of Post:
    • Pages that use post data cannot be bookmarked. (If you so desired.)

     

    A RESTful app will use PUTs for operations which are not safe but which are idempotent. Typically a PUT is used for editing a resource (editing a question).
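
    A quick C# sketch of the three verbs in action (the endpoint urls are made up):

    using System;
    using System.Net;

    class HttpVerbs
    {
        static void Main()
        {
            var client = new WebClient();

            // GET: safe and idempotent - just reads the resource
            string question = client.DownloadString("http://example.com/questions/42");

            // POST: neither safe nor idempotent - running it twice creates two questions
            client.Headers[HttpRequestHeader.ContentType] = "application/x-www-form-urlencoded";
            client.UploadString("http://example.com/questions", "POST", "title=A+new+question");

            // PUT: not safe, but idempotent - repeating the same edit leaves the same state
            client.Headers[HttpRequestHeader.ContentType] = "application/x-www-form-urlencoded";
            client.UploadString("http://example.com/questions/42", "PUT", "title=An+edited+question");

            Console.WriteLine(question.Length);
        }
    }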