10 May 2012 @ 3:56 PM 

As companies become more interested in exposing web services to get data to mobile apps, the question always comes up: “What format should we use for the services?” JSON seems to be the default answer these days – but enterprise companies often have existing investments in XML-based web services, or they’re using an enterprise web services stack, middleware, or ESB that doesn’t have great REST/JSON support. JSON is clearly the lighter format, but I don’t think it should be the default answer without some analysis…

Click here to read the rest of this article on the WMP Blog.


 03 May 2012 @ 11:56 AM 

As mobility has exploded onto the list of top IT priorities over the past few years, many IT shops are realizing that one of the first steps in creating mobile apps that make use of data is figuring out how to properly expose that data via web services. Beyond the standard technical recommendations, there are some common architectural design principles to be aware of when designing and building services – whether they’re internal enterprise services or public-facing mobile web services.

These principles are well documented throughout the internet, but I found it difficult to find a simple, concise, easily digestible summary that wasn’t quickly treading into pie-in-the-sky architecture speak. Careful consideration should be given to these principles early in the design process in order to mitigate the downstream impact of poor architectural decisions. Most of these principles are also beneficial to other areas of object-oriented software development (e.g. loose coupling) – however, the principles below become readily apparent and relevant when building services.

The six principles are:

  • Loose coupling
  • Autonomy
  • Statelessness
  • Explicit contracts
  • Composability
  • Discoverability

Loose Coupling

Several of the six principles (autonomy, statelessness) also assist in creating loosely coupled services. A loosely coupled service requires limited knowledge of the definition or inner workings of other components relied upon by the service. The benefit of loose coupling is the flexibility and agility gained by being able to change the inner workings of a service as needed, without having to make related changes to clients or other components relying upon the service. There are a variety of ways in which services can be tightly coupled – for example: contracts, security, technology, and statefulness.

Coupling via the service contract can occur when the contracts are built against existing logic from back-end systems (e.g. using ORM-generated classes that contain every single database field), which can hinder the evolution of a contract because it has not been designed independently of the underlying logic. Coupling via security or technology can occur when a service is based upon security or communication protocols that limit the adoption of the service – for example, a service built upon an outdated technology (e.g. CORBA or .NET Remoting), or a rarely used security protocol which may not work across all devices. Coupling via state can occur when service operations must be called in a specific order, coupling the service to that order of operations and to session state that must be maintained on the server. This coupling to state can also have an operational impact – for example, a step cannot be added into the middle of a process without updating all service clients.

Autonomy

Service autonomy is a design principle that allows services to operate more reliably by having maximum control over their execution environment and relying as little as possible on the availability of external resources over which they have no control. Autonomy can be increased by running services on dedicated hardware (reducing dependence on other systems running on the same hardware) or by storing cached copies of data, thus reducing dependence on external databases. This isn’t possible in every scenario, but it’s good to keep in mind.

Statelessness

Requiring that service operations be called in a specific order increases coupling via an implicit contract – meaning that the order in which the operations should be called is not defined or documented by the service itself, but rather must be conveyed through outside knowledge. Reducing state within services also allows services to be more scalable, as the amount of resources consumed by the service to manage and track state information does not increase with the number of consumers. The most common way to reduce service state is to manage all state at the consumption level – forcing the client to keep track of its own state.
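To make the contrast concrete, here is a minimal C# sketch – the order-processing operations and type names are hypothetical, not from any particular framework. The stateful version forces an implicit call order and per-consumer server state, while the stateless version pushes that bookkeeping to the consumer.

using System.Collections.Generic;

// Hypothetical order-processing contracts, for illustration only.
public class OrderItem { public int ProductId { get; set; } public int Quantity { get; set; } }
public class OrderRequest { public int CustomerId { get; set; } public List<OrderItem> Items { get; set; } }
public class OrderConfirmation { public string ConfirmationNumber { get; set; } }

// Stateful style: the server must track the in-progress order between calls,
// and the operations only work when called in this exact (implicit) order.
public interface IStatefulOrderService
{
    void StartOrder(int customerId);
    void AddItem(int productId, int quantity);   // assumes StartOrder was already called
    OrderConfirmation SubmitOrder();             // assumes items were already added
}

// Stateless style: each call carries everything it needs, so the service keeps
// no per-consumer state and calls can be retried or load-balanced freely.
public interface IStatelessOrderService
{
    OrderConfirmation SubmitOrder(OrderRequest request);
}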

Explicit Contracts

Service consumers should rely only upon a service’s contract to invoke and interact with it. Those interactions should be based solely on the service’s explicit contracts – those defined by the service itself (e.g. via a WSDL document) – rather than on any implicit contracts, such as external documentation.
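As a rough illustration (the customer types below are hypothetical, not taken from any particular framework), an explicit contract exposes a purpose-built shape rather than whatever the back-end entity happens to contain:

// Internal ORM-style entity: full of private implementation details
// (keys, flags, concurrency tokens) that consumers should never see.
public class CustomerEntity
{
    public int CustomerKey { get; set; }        // database primary key
    public string Name { get; set; }
    public string Email { get; set; }
    public bool IsSoftDeleted { get; set; }     // internal flag
    public byte[] RowVersion { get; set; }      // concurrency token
}

// Explicit service contract: designed independently, exposes only what
// consumers actually need, and can evolve separately from the schema.
public class CustomerContract
{
    public string CustomerId { get; set; }      // opaque identifier, not the raw key
    public string Name { get; set; }
    public string Email { get; set; }
}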

Several best practices regarding contracts include:

  • Ensure that the contract definitions remain relatively stable over time to prevent downstream impacts on the service consumers.
  • If the service contracts must be changed, use versioning if possible so that existing consumers are not broken.
  • Avoid exposing private, internal data to consumers – remember the object-oriented design principle of encapsulation. Private implementation details (e.g. primary keys, internal flags, etc.) need not be exposed.
  • Contracts should be designed to be as explicit as possible to prevent errors in interpretation.

Composability

Service composability is a design principle which encourages services to be designed in a way that they can be consumed by multiple external systems. This promotes reusability and agility within a service-oriented environment, as it allows new systems to be built by re-using existing services. Composability is further enabled by several of the other design principles, including Autonomy (which increases the reliability of the service such that it can be used in other systems) and Statelessness (which allows services to be used in conjunction with other services without regard for state).

Discoverability

Discoverability is a design principle that encourages services to be discovered more easily by adding metadata about how the service should be invoked and interacted with, and storing that metadata in a central repository if possible. By making the services more easily discoverable and cataloging the related information, the services are inherently more interoperable and can be re-used more easily. The core consideration for this principle is that the information catalogued for the services needs to be both consistent and meaningful. The information recorded about the services should be accessible by both technical and non-technical users, allowing anyone to evaluate the capabilities of the service and whether or not it should be used.

Service Design Recommendations

While not all of the service design principles can be followed to the letter in every real-world services implementation, the guidelines can generally be applied regardless of the scenario. The following recommendations should be observed with regard to building well-designed services.

  • Limit coupling where possible by limiting the amount of knowledge of inner workings required to consume a service. This allows for more flexibility in consumer systems and the ability to change services more easily.
  • Use common, well-supported message formats such as SOAP/XML or REST/JSON and well-supported security paradigms (e.g. token-based authentication) to limit coupling via technology.
  • Use caching where possible within the services layer to speed data retrieval and response times, and to increase service autonomy.
  • Define explicit service and data contracts where possible, and avoid exposing private implementation details – sending only the data necessary to complete a unit of work.
  • When making breaking changes to contracts, consider including a version number in the HTTP headers or with the initial login web service call – if the service client or mobile app is out of date, the user can be alerted and directed to the app store for an update (a rough sketch follows this list).
  • Make services stateless whenever possible to reduce coupling, and if not possible – ensure that state is stored external to the service, e.g. in the session store or cache of your web framework.
  • Build systems by re-using existing services when possible to promote service composability and to avoid creating multiple versions of a service method that perform the same business function.
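As a rough sketch of the version-number idea above – the header name (“X-Api-Version”), the minimum version, and the choice of status code are all just examples, not a standard – a check like this could live in Global.asax, scoped in practice to your service URLs only:

protected void Application_BeginRequest(object sender, EventArgs e)
{
    // Hypothetical minimum version this deployment of the services supports.
    const int minimumSupportedVersion = 2;

    string versionHeader = HttpContext.Current.Request.Headers["X-Api-Version"];

    int clientVersion;
    if (!int.TryParse(versionHeader, out clientVersion) || clientVersion < minimumSupportedVersion)
    {
        // Reject the call; the mobile app can treat this response as
        // "please update" and send the user to the app store.
        HttpContext.Current.Response.StatusCode = 426; // "Upgrade Required" – one possible choice
        HttpContext.Current.Response.End();
    }
}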

 06 Jan 2011 @ 11:06 AM 

Best Practices for Mobile Application Design and Development

When developing mobile applications, there are a number of key challenges where architecture and design are fundamentally different from that of a typical enterprise application. Careful consideration should be given to these mobile architecture issues early in the development process in order to mitigate the downstream impact of poor architectural decisions.

Click here to read the rest of this article on the WMP Blog.


 17 Sep 2010 @ 8:38 AM 

Have you ever built a search using a SQL LIKE statement, only to have your users complain about functionality? A simple SQL-based search doesn’t handle synonyms, misspellings, prefixes, suffixes, result rankings, weighting, and so on and so forth. Fret no longer, you can spend a little more time and build a “smart” search using Lucene and get all of these features as well as the ability to tweak the search as much as you like.

Lucene.NET is a direct port of the popular open source Java Lucene project. Large companies such as EMC and Cisco have placed bets on Lucene and embedded the library within some of their products. The .NET version is a little bit behind the Java version in terms of features and releases, but by and large the library is very usable. Lucene can be used to index just about any type of content – including files, database records, and web pages – and can be used in any number of architectural scenarios: searching in an ASP.NET web site, searching within a desktop app, search as a web service or Windows service, etc.

In the simplest search scenario, architecturally, you have to build an Indexer and a Searcher. You can think of Lucene as a set of tools that will do most of the work for you in building these components – you have to use Lucene to build an index and dump your searchable content into that index, and you have to tell Lucene how to search the index that you’ve built. Conceptually, the index is built from the content that you want to search, whether it be files or database records. If the content you want to search changes (for example, you’ve added a new file), then you have to either append that content to your index or rebuild your index. One strategy is to set up a scheduled process (e.g. using Quartz.NET, a Windows service, or a scheduled task) to periodically re-index your content.

Adding Lucene to your project

First things first, you have to add the Lucene libraries to your project. On the Lucene.NET web site, you’ll see the most recent release builds of Lucene. These are two years old – do not grab them, as they have some bugs. There has not been an official release of Lucene.NET for some time, probably due to resource constraints of the maintainers. Use Subversion (or TortoiseSVN) to browse around and grab the most recently updated Lucene.NET code from the Apache SVN repository. The solution and projects are Visual Studio 2005 and .NET 2.0, but I upgraded the projects to Visual Studio 2008 without any issues and was able to build the solution without any errors. Go to the bin directory, grab the Lucene.Net.dll, and add a reference to it in your project.

Building the Index

Step two is building your searchable index. A Lucene index is usually stored as a set of files on the file system, but can also be stored in memory for performance – and there are even proof of concept projects available that allow you to store the index in a database (though I’m not sure why you would).

A few Lucene concepts/classes you should be aware of for indexing include Documents, Fields, Analyzers, and the IndexWriter. Documents are what you put into your index. They’re not “documents” in the traditional sense, like a Word document – rather, a Document is just an abstraction of an indexable piece of content. It is your responsibility to create the Document objects to place into your index.

For example, let’s say we’re creating a product search, using Product objects pulled from our database. Our searches will be based on the Product Name.

        // Requires: using Lucene.Net.Documents; (Lucene.Net 2.9.x)
        public class Product
        {
            public Product() { }

            public string ProductName { get; set; }
            public decimal Price { get; set; }
            public string Color { get; set; }
            public int Id { get; set; }

            //return a Lucene document for the product
            public Document GetDocument()
            {
                Document document = new Document();
                document.Add(new Field("ProductName", this.ProductName, Field.Store.NO, Field.Index.ANALYZED));
                document.Add(new Field("Id", this.Id.ToString(), Field.Store.YES, Field.Index.NO));
                return document;
            }
        }


We’ll add fields to our Document to represent the values we want to search on or store in our index. Field.Store.YES/NO indicates whether or not we want to actually store the field in our index. Note how I don’t store the Price or Color columns – we don’t want to store the complete objects in Lucene; it’s just our search index. Keep the complete objects stored in your database (or keep your files on the file system, etc.). We do want to store the Id, because when we get our search result documents back from querying the index, only the stored fields will be returned. We need to at least know the Product Id so we can go fetch the full objects that match our search results from the database. There is also a COMPRESS option that you can use if you need to store large fields or binary data.

Field.Index.ANALYZED/NO indicates whether or not we want to actually index the field. Indexing a field takes some minimal level of processing power, so we don’t want to index every field – only index what you want to search on. Thus we don’t want to Index the Product Id, Color, or Price – only the Name because that’s all we want to search on.

Next, we’ll create the index and add the documents to it. Below is an example of a very simple class with a single method that we can use to build our Product search index using a given list of products.

 

        // Requires: using Lucene.Net.Analysis; using Lucene.Net.Analysis.Standard;
        //           using Lucene.Net.Index; using Lucene.Net.Store;
        public class Index
        {
            public void BuildIndex(List<Product> products)
            {
                FSDirectory directory = FSDirectory.Open(new System.IO.DirectoryInfo("C:\\temp\\"));

                Analyzer analyzer = new StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_29);

                IndexWriter indexWriter = new IndexWriter(directory, analyzer, true, IndexWriter.MaxFieldLength.UNLIMITED);

                foreach (Product product in products)
                {
                    indexWriter.AddDocument(product.GetDocument());
                }

                indexWriter.Optimize();
                indexWriter.Close();
            }
        }


The FSDirectory is just an abstraction of the storage of the index, and there are “directory” classes that represent in-memory storage, etc. that you can use as well. You can pass a DirectoryInfo object to the Open method to specify where to store the search index.

The Analyzer’s job is to parse and tokenize your data for the index. There are a number of different Analyzers implemented in Lucene, but the StandardAnalyzer is the most straightforward. The StandardAnalyzer will do a few things to your text – including removing junk search terms (aka “stop words”) and punctuation, and normalizing the case of your text. There are a number of constructors available for the StandardAnalyzer, and you can specify your own stop words if you like, but there is a list of common stop words built into Lucene. There is another good analyzer available called the SnowballAnalyzer, which stems words (stripping common suffixes such as plural endings) and can greatly improve your search results. The SnowballAnalyzer is a separate Lucene project outside of the main source code; it can be found under the contrib folder in the Lucene source (not in the main Lucene.Net solution) – build it yourself and include it in your project if you would prefer to use it instead of the StandardAnalyzer.

The IndexWriter is responsible for creating the index. The IndexWriter is actually thread safe, and an index can be rebuilt while being read from at the same time without you having to manage the locking of the index files – Lucene takes care of that for you. There is a boolean parameter on the constructor that indicates whether to recreate or append to the index. Simply call the AddDocument method on the IndexWriter to write documents to the index. When you’re finished writing documents to the index, you must call the Close method. Optionally, you can call the Optimize method before closing the index, which will greatly shrink its size – however, this can sometimes take a few seconds, so you may not want to call Optimize if you have indexing performance concerns.

Now that we have the Index built, we can move on to actually searching the index…

Searching the Index

Below is an example method that you could use to search your newly created product search index; you could potentially add it to your Index class. You’ll see a few of the same classes from the indexing sample being used in the search method. As in the previous example, you’ll use the FSDirectory class to specify where the index is located. Then, you’ll need to create an IndexReader, passing in your directory object. The second parameter of IndexReader.Open specifies whether or not to open the index in read-only mode – for our simple purposes, we only need to read from the index. One thing to note about the IndexReader is that it is fairly expensive to create, so you don’t want to create one every time a search runs in your web application, for example. Create a single IndexReader – perhaps via a singleton pattern or by caching the IndexReader object – and re-use it. Next, we need an IndexSearcher to actually search our index, which is fairly straightforward.
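For example, here is a minimal sketch of caching a single searcher. It reuses the index location from the examples above and ignores the question of refreshing the searcher after re-indexing, so treat it as an illustration rather than production code.

        // Requires: using Lucene.Net.Index; using Lucene.Net.Search; using Lucene.Net.Store;
        public static class SearcherCache
        {
            private static readonly object padlock = new object();
            private static IndexSearcher searcher;

            public static IndexSearcher GetSearcher()
            {
                lock (padlock)
                {
                    if (searcher == null)
                    {
                        FSDirectory directory = FSDirectory.Open(new System.IO.DirectoryInfo("C:\\temp\\"));
                        searcher = new IndexSearcher(IndexReader.Open(directory, true));
                    }
                    return searcher;
                }
            }
        }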

When searching, the search queries must be parsed and tokenized in the same way that the data was parsed when it was placed into the index. Because of this, the same type of Analyzer that was used to create the index must also be used to parse the search queries – if a StandardAnalyzer was used to create the index, a StandardAnalyzer must also be used to parse queries against it. The QueryParser parses the query text against the field that is going to be searched – as you can see in the QueryParser constructor, we’ll be searching against the “ProductName” field from our documents. After that, simply call the Parse method on the QueryParser to get the Query that we’ll pass to the searcher. If you want to search on multiple fields – say we wanted to search on both the Product Name and the Color – you can use the MultiFieldQueryParser class to query against multiple fields. With the MultiFieldQueryParser, you can even do some clever things like weighting fields differently, e.g. making product name matches rank higher than color matches.
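As a rough sketch of the multi-field case – and assuming a “Color” field had also been analyzed and indexed, which the indexing example above does not do – the parser swap looks roughly like this:

        // Sketch only. Requires: using Lucene.Net.Analysis.Standard; using Lucene.Net.QueryParsers;
        StandardAnalyzer analyzer = new StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_29);
        string[] fields = new string[] { "ProductName", "Color" };
        MultiFieldQueryParser multiParser = new MultiFieldQueryParser(Lucene.Net.Util.Version.LUCENE_29, fields, analyzer);
        Query query = multiParser.Parse("red widget");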

Next, we’ll create a collector that defines how the search results are collected from the searcher – we’ll use a TopScoreDocCollector. The first parameter is the maximum number of results, and the second parameter determines whether or not the results are sorted in order of search relevance. For our purposes, we want to show customers the best results for their search query, so we’ll obviously want our results sorted. From there, simply call the Search method on the searcher, passing in the query and the document collector, and receive a collection of scored matches based on the search query. For each match, you can call the Doc method on the searcher to retrieve the full Document that was placed in the index originally. After I’ve collected the Product IDs from the search result documents, I go back and fetch the full Product objects from the database. Depending on what fields you choose to store in your Lucene index, you may not need to re-fetch what you’re searching for from the database – it’s a good idea to store just enough data to display the search results, so that you don’t need to make a trip to the database only to display them.

        // Requires: using Lucene.Net.Analysis; using Lucene.Net.Analysis.Standard;
        //           using Lucene.Net.Documents; using Lucene.Net.Index;
        //           using Lucene.Net.QueryParsers; using Lucene.Net.Search; using Lucene.Net.Store;
        public List<Product> SearchProductName(string productName)
        {
            FSDirectory directory = FSDirectory.Open(new System.IO.DirectoryInfo("C:\\temp\\"));

            IndexReader reader = IndexReader.Open(directory, true);

            Searcher searcher = new IndexSearcher(reader);

            Analyzer analyzer = new StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_29);

            QueryParser parser = new QueryParser(Lucene.Net.Util.Version.LUCENE_29, "ProductName", analyzer);

            Query query = parser.Parse(productName);

            TopScoreDocCollector collector = TopScoreDocCollector.create(100, true);

            searcher.Search(query, collector);

            ScoreDoc[] hits = collector.TopDocs().scoreDocs;

            List<int> productIds = new List<int>();

            foreach (ScoreDoc scoreDoc in hits)
            {
                //Get the document that represents the search result.
                Document document = searcher.Doc(scoreDoc.doc);

                int productId = int.Parse(document.Get("Id"));

                //The same document can be returned multiple times within the search results.
                if (!productIds.Contains(productId))
                {
                    productIds.Add(productId);
                }
            }

            //Now that we have the product Ids representing our search results, retrieve the products from the database.
            List<Product> products = ProductDAO.GetProductsByIds(productIds);

            reader.Close();
            searcher.Close();
            analyzer.Close();

            return products;
        }


Again, keep in mind this is only an example method. The examples above are based around searching rows that live in a database, but they could easily be adapted to searching through a directory of files, or searching through indexed web pages. The Lucene class structure seems, to me, highly abstracted – but this allows for ultimate flexibility. Search is a finicky thing and you’ll always run into scenarios where your client doesn’t like the way the search works – that’s fine, because Lucene gives you the flexibility to change how the search works.


 28 Feb 2010 @ 1:49 PM 

I was recently trying to resurrect an older project developed on Windows XP with .NET Framework 2.0, Visual Studio 2005, NHibernate, and SQL Server CE 3.1.  I’ve since moved to Windows 7 (64-bit) and Visual Studio 2008.

I ran into a surprising number of hurdles while trying to get the application up and running again on 64-bit Windows 7.  I figured I would document this here, just in case anyone else runs into the same issues.

Step 1) Try to build the solution.  Everything builds fine after installing SQL Server Compact Edition.

Step 2) Try to run the application.  Get an exception immediately:

“Could not create the driver from NHibernate.Driver.SqlServerCeDriver.”

InnerException:

“The IDbCommand and IDbConnection implementation in the assembly System.Data.SqlServerCe could not be found. Ensure that the assembly System.Data.SqlServerCe is located in the application directory or in the Global Assembly Cache. If the assembly is in the GAC, use <qualifyAssembly/> element in the application configuration file to specify the full name of the assembly.”

Turns out the issue here is that the System.Data.SqlServerCe dll has to be in the same folder as the application executable.  Pretty easy fix – set Copy Local to ‘True’ on the reference to System.Data.SqlServerCe.

Step 3) Run the application again – now I get a different exception:

“Unable to load DLL ‘sqlceme35.dll’: The specified module could not be found. (Exception from HRESULT: 0x8007007E)”

Turns out the issue with this exception is that SQL Server Compact Edition is built for x86 and has to run in WoW mode on x64 systems.  My solution platform is set to ‘Any CPU’, which worked fine when I was developing on Windows XP.  To fix the issue, go through all of the Visual Studio projects – go to Properties > Build > Platform Target, and set Platform Target to ‘x86’ instead of ‘Any CPU’.

Step 4) Try to run the application again… and I get yet another exception:

“ADOException: cannot open connection” with InnerException of:

“The database file has been created by an earlier version of SQL Server Compact. Please upgrade using SqlCeEngine.Upgrade() method.”

This is kind of annoying – the Visual Studio 2008 Upgrade Wizard changed all my references from SQL Server CE 3.1 to SQL Server CE 3.5.  How thoughtful.  Unfortunately, I don’t know what the implications of ‘upgrading’ the database are.  Everything worked fine with 3.1 – why introduce any more change to the application?  So, I set the references back to SQL Server CE 3.1 instead of 3.5.

Step 5) Run the application… again.

No exceptions! Everything works with SQL Server CE 3.1! Upgrade complete.


 10 Feb 2010 @ 8:44 PM 

One thing that any web developer worth their salt should know is the basics of search engine optimization (SEO).  Much of SEO comes down to basic code-level best practices, and it isn’t terribly difficult to simply bake SEO into your development process when working on public facing web applications.  However, keep in mind that SEO will always be an evolving, fuzzy science, changing on the whim of the indexing strategies of major search engines.  Immediate results are rare, and a long term process should be in place to truly understand the benefit (or detriment) incurred.

I break the concept of SEO down into a few categories that I’ll explain further below…

  1. Content SEO (internal factors)
  2. Strategic SEO (external factors)
  3. Insight and Tracking

Content / Internal SEO

These ‘Content / Internal’ best practices are things that a developer or content creator can bake in during the site development process.  Only a few of these items will make a difference on their own, but as a whole they can make an enormous impact.  These basic factors should lay the foundation for any SEO strategy; however, these internal factors absolutely cannot be the only part of your SEO strategy.  Here are a few of the most important ones…

  • Page Titles.  Arguably one of the most important content-level factors, this is one of the few that can make an enormous difference on its own.  Your page titles (what goes in the HTML <title> tag) should be relevant to what is on the page, yet I often come across page titles that only contain the name of the site.  Instead, you should have the ‘title’ of the page prefixed or appended to the name of your site.  Some believe that appending the name of your site to the page title is better than prefixing it.
  • Page URLs.  This goes hand in hand with your Page Titles, as page URLs carry almost equal importance.  The URLs of your pages should closely mirror the titles of your pages, but don’t need to be exact.  Popular opinion is that the closer keywords in your URL are to the end of your domain name, the better.  Search engines have a very ‘human’ behavior in this case… tell me, which URL is more descriptive of this post – “http://jsprunger.com/search-engine-optimization-101/” or “http://jsprunger.com/?p=88”?  Search engines think the same way.
  • Freshly updated and unique content.  The more your web site content is updated, the more often it will be indexed by search engines.  Sites with freshly updated content seem to get a bonus from most search engines.  Bloggers in particular should ensure that their sites are configured to ‘ping’ a service like Ping-o-Matic whenever a new post is created; this will immediately notify Google and many other services of your new content.  Having unique content is perhaps one of the most important factors – simply rehashing or copying content will get you absolutely nothing from most major search engines; in fact, duplicate content can seriously hurt your rankings.
  • Keyword usage in your content.  Also a highly important factor – whatever keywords you want to rank for, make sure you’re using them in your content.  Think about what your customers or clients are going to search for.  A few guidelines for keyword usage…
    • Don’t overuse your keywords, don’t be spammy.  Find the right balance between keyword usage and having readable, engaging content.
    • Make sure you have your keywords in your page title and URLs.
    • Use keywords within the first 100 words of the page or within HTML headers.
    • Get your keywords used in external links to your site.  More on this later…
  • Image alt tags.  This is a pretty minor SEO factor, but very important if you have any interest in getting results from services like Google Image Search.  The productivity from image search results is usually pretty low for most businesses, but every little bit can help sometimes.  Some web sites (i.e. e-commerce, product catalogs) can benefit from image search much more than others.  Make sure you have descriptive ‘alt’ attributes on your <img> tags – this is a best practice for usability and accessibility in general though.
  • Meta keywords and descriptions.  Long gone are the days of meta tags being useful for SEO.  However, the meta description tag can still play a huge role in your pages getting click-through from the search results. Google will use the meta description of your page as the ‘teaser’ for the search result, but if you’re missing this tag you’ll often just see garbage or irrelevant content for the teaser.  Users are much more likely to click through to your content in search results if the result description is accurate and compelling.
  • Updated Sitemap and sitemap.xml file.  Keeping an up-to-date listing of all of the content on your site in a sitemap will greatly enhance the ability of search engines to properly index 100% of the content on your web site.  You can use a tool like the Google Sitemap Generator to keep a continually updated sitemap file.
  • Avoid so-called ‘black hat’ or any sort of sneaky SEO techniques.  These strategies usually revolve around hiding or cloaking text on your pages in an attempt to fool search engines.   It isn’t worth it – leading search engines can easily detect and adapt to these techniques, resulting in your search rankings taking a dive or even a complete blacklisting of your site.

Strategic / External SEO

Strategic SEO includes all of the factors external to your website that can affect your search engine rankings.  The number one external factor is getting ‘backlinks’ to your content – this is what made Google so ridiculously powerful and accurate, and their rankings are still very much based on the number, diversity, and quality of links to your site.

Backlinking can be explained with this anecdote: Several years ago you could search for ‘Miserable Failure’ on Google and the number one result was the White House biography page for George Bush.  This was due to a simple viral campaign to get people to put links on their websites, comments, blog posts, etc. linking to the biography page with the anchor text ‘Miserable Failure’.  That’s how backlinks work.  The more external, inbound links to your site, the more ‘authoritative’ your site appears to be in the eyes of major search engines.

But how can you get these backlinks? A few examples…

  • Mainstream media and press releases.  Old fashioned, but if this is relevant to your industry, press releases for important announcements make their way around the internet very quickly.  This obviously works best if the press releases link back to your web site.
  • Getting linked and promoted in blog posts.  Do your friends, colleagues, or business partners have blogs or websites? Ask or barter with them to promote your content, requesting specific keywords be used in links to your web site.  This is a two way street – the more you’re willing to promote content from other sites, the more they’ll be willing to promote you back.  However, popular opinion is that one-way links are deemed to be of higher quality in the eyes of major search engines.
  • Twitter (annoying or ridiculous as many believe it to be) can be a great way to spread the word about your content.  Maybe you’ll get lucky and someone with 30,000 followers will retweet your link if you’ve included the proper hashtags.  After this happens, you’ll start to see your links pop up all over the internet.
  • Social Bookmarking.  Submitting your content to social bookmarking sites like Del.icio.us, Reddit, and Digg or more niche-specific sites can be a great way to spread the word about your content.  These services will also often directly link to your content with the exact text that you’ve specified – bonus!  Don’t be a spammer though, if you have high quality, unique content that people actually want to see – submit it.  If not, don’t bother.
  • Make it easy for your readers to submit your content to social bookmarking sites, for example – drop an AddThis button on your website like the one at the top of this post.  This allows your users to easily link and promote your content if they find it valuable.
  • Targeted submissions.  Do you have niche content? Find targeted venues for submitting your content and articles. For example, you’ve written an article relevant to the Healthcare industry. Track down some Healthcare industry groups on LinkedIn and submit your article to the news sections. Contact industry publications, they’re often happy to include high-quality articles.
  • Alliances and partnerships.  Work with your business partners and allies to cross promote each other where applicable.  For example, you’re a partner for a specific vendor. If you work closely with that vendor, they’re often more than happy to promote their most capable partners by linking them on pages within their own websites.

Insight and Tracking

As mentioned previously, part of SEO is a process of testing out your SEO changes and tracking their effectiveness over time.  A variety of free and paid tools are available to assist you in analyzing your search rankings, search terms, and keyword effectiveness.  Below I’ve listed a few tools that can help.

  • Google Analytics – by far the best free website traffic tracking software that I’ve ever used.  Formerly known as Urchin, Google Analytics allows you to slice, dice, drill down, and report into your tracking data any way you like.  Even better, Google Analytics allows you to configure “goals” for your web site which are basically actionable things that users of your site can perform that are of value to you, the business owner.  For example, submitting a contact form, downloading a white paper, completing a transaction, etc.  Dollar amounts, if applicable, can be tied to goals, allowing you to determine the exact revenue per visitor.  This effectively allows you to determine the most valuable incoming keywords and most effective traffic sources for your web site.  Beyond visitor value, Google Analytics can help you determine many more important statistics.  For example…
    • Most popular content on your website
    • Browser capabilities of your visitors
    • Location and language preferences of your visitors
    • Most popular search terms used to find your web site
    • Tracking of your CPC ad campaigns
    • Tracking visitor loyalty
    • Tracking the top exit pages for your web site (pages where visitors leave)
  • Keyword ranking monitoring and reporting.  There are a variety of free and paid tools that will allow you to continually monitor and report on current and historical keyword rankings for your own website, as well as the keyword rankings for your competitor’s websites.  These tools will allow you to see if you’re making progress on increasing your search rankings.
  • SEO analysis tools such as the Microsoft SEO Toolkit allow you to analyze your website to check for content-level flaws such as broken links and duplicate content that can affect your search engine rankings.  The Microsoft SEO Toolkit allows you to view detailed information about SEO problems on your website using built-in reports and dashboards – an extremely useful tool when analyzing the state of SEO on an existing web site.

There is much more to search engine optimization than can be written up in a single blog post (see also: thousands of blogs dedicated purely to the subject).  However, I hope this quick guide to the basics will give you the tools necessary to implement numerous high impact SEO quick wins for a client or personal web site. For web developers, the factors listed above should be kept in mind whenever developing customer-facing websites that could benefit from enhanced search results and search rankings.  Most of the ‘content / internal’ best practices can be easily baked into the development process of almost any e-commerce or content management system implementation project.


 01 Feb 2010 @ 8:30 AM 

The impact of performance is much more readily apparent in .NET Compact Framework applications.  Mobile devices commonly have a CPU that is 10 times slower than your desktop CPU, and possibly up to 100 times less RAM than a desktop or server.  In Agile or XP development, the mantra is often to ignore performance considerations until necessary – I don’t think you can apply that to .NET CF development, or it will really bite you in the end.  You don’t have to go nuts and optimize everything up front, but there are some very important things to keep in mind when developing a Windows Mobile application…

Standard .NET Framework Performance Considerations

Many of the standard .NET Framework performance best practices can become apparent very quickly including…

  • Object Boxing and Unboxing.  Use generics wherever possible and avoid ArrayLists and type conversions.
  • String and StringBuilder.  Need to perform lots of string concatenations? Use a StringBuilder instead of the ‘+’ operator.  When you use the ‘+’ operator, a new string object is created each time you concatenate, increasing memory usage, and concatenating a large number of strings this way is much slower (see the sketch after this list).
  • Memory leaks.
    • When doing .NET CF development, if an object implements the Dispose() method – call it when you are finished with the object.
    • One of the most common causes of memory leaks is failing to unhook event handlers when they’re no longer needed.  If you manually hook up an event with the ‘+=’ operator, ensure you unhook it when finished with the ‘-=’ operator.
    • Pre-allocate collections if possible.  Standard .NET behavior is to automatically double the size of a collection when the upper limit is reached while adding items.  If you know the number of elements that are going to be in a collection, pre-allocate the size of the collection when instantiating it.
  • Don’t use Exceptions for flow control in an application.  Exceptions are an expensive operation, performance wise.  I’m not saying don’t use exceptions, but don’t use them in areas where you can perform simple checks to prevent them from being thrown.  For example, if you might divide by zero – perform a simple check before the operation occurs rather than handling a DivideByZeroException.  The check is much less expensive than the exception.
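As a trivial illustration of the StringBuilder, event unhooking, and collection pre-allocation points above (the class and method names are made up, not from a real application):

using System;
using System.Collections.Generic;
using System.Text;

public class PerformanceExamples
{
    // Building one large string: StringBuilder avoids allocating a new
    // string object for every '+' concatenation.
    public string BuildCsv(List<int> ids)
    {
        StringBuilder builder = new StringBuilder();
        foreach (int id in ids)
        {
            builder.Append(id);
            builder.Append(',');
        }
        return builder.ToString();
    }

    // Pre-allocating a collection when the final size is known up front
    // avoids the repeated internal doubling as items are added.
    public List<string> CopyNames(string[] names)
    {
        List<string> copy = new List<string>(names.Length);
        foreach (string name in names)
        {
            copy.Add(name);
        }
        return copy;
    }

    // Unhook event handlers that were hooked up with '+=' so the
    // publisher does not keep the subscriber alive.
    public void Detach(System.Windows.Forms.Button button)
    {
        button.Click -= OnButtonClick;
    }

    private void OnButtonClick(object sender, EventArgs e) { }
}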

.NET Compact Framework-Specific Performance Considerations

However, the .NET Compact Framework is different than the full framework in many ways, leading to a slew of .NET CF specific performance considerations…

  • Avoid making virtual function calls.  They are up to 40% slower than instance and static function calls.  I don’t completely understand the reason for this, but you can read more about it here if you’re interested.
  • There are a few things in .NET CF that are slow because of virtual calls and object boxing/unboxing.  These include:
    • Reflection.  Very slow in .NET CF.
    • XML Deserialization and DataSets.  Extremely slow because reflection is slow.
  • Avoid creating many copies of Form objects.  Creating a Form is an expensive operation, and unused Form objects are a common cause of memory leak issues. You may want to create your Forms once and cache them in the background for reuse.
  • You can increase the speed of binding data to controls by using the BeginUpdate and EndUpdate methods on a control before and after your data binding occurs.  This will cause the control to not repaint until the binding is finished (a short sketch follows this list).
  • Cache expensive resources.  For example, don’t create many different copies of a web service client.  Create a single, cached instance of it that can be used throughout your application.
  • Always test your application on a wide range of physical devices.  If the target device is known, at least test on that device.  Some things seem to perform much better when running on the emulator or when executing unit tests on your desktop environment.
  • This is a more general performance testing best practice, but always test with real data and real quantities of data.  This can really bite you on deployment of your application.  I know this from experience – a great example is that deserializing a few hundred objects is much, much faster than deserializing 10,000 objects.  In my experience, deserializing 7,000 very simple DTO objects from an ASMX web service was taking up to 20 minutes in some cases.  To alleviate the issue, we ended up switching to a JSON web service, which was much faster to deserialize.
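For the BeginUpdate/EndUpdate point in the list above, here is a short sketch – it assumes a standard ListView control, and the helper class is made up for illustration:

using System.Collections.Generic;
using System.Windows.Forms;

public class ListHelper
{
    // Suspend repainting while many items are added, then repaint once at the end.
    public void PopulateList(ListView listView, List<string> rows)
    {
        listView.BeginUpdate();
        try
        {
            listView.Items.Clear();
            foreach (string row in rows)
            {
                listView.Items.Add(new ListViewItem(row));
            }
        }
        finally
        {
            listView.EndUpdate();   // the single repaint happens here
        }
    }
}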

 30 Jan 2010 @ 1:36 PM 

I’m starting up a short Windows Mobile project again, so I thought it would be a good time to collect some of my best practices for .NET Compact Framework development and post them.  I’m going to break them down into two sections – usability (below) and performance best practices (in another post).

Windows Mobile Usability Best Practices

Microsoft has put together a very specific set of guidelines for Windows Mobile usability – the point of this is to get a consistent look and feel and consistent application experiences on their platform.  Apple has the same sort of guidelines for iPhone development and it really pays off – most applications have the same consistent look and feel and excellent usability.  Of course, many of these usability guidelines are relevant across many development platforms, but there are some special considerations for mobile development.

Usability is a challenge in mobile development.  Some of the main concerns include…

  • Limited screen real estate.  In Windows Mobile, the most common size is around 480 x 640 pixels.
  • Limited input options.  Touch screen.  Potentially no hardware keyboard.  No mouse, and no scroll wheel.
  • Lighting – Indoor / Outdoor usage.
  • Gloves (i.e. warehouse users)
  • Finger vs. Stylus

Here are some of the most important usability guidelines that Microsoft has set forth…

  • Only display the most relevant information and options on the screen, i.e. don’t clutter up the screen with 100 different rarely used options.  If a feature is rarely used, place it in a menu or submenu.  If a feature or action is used very often, think about assigning it to one of the standard left or right soft keys.
  • Use high contrast, sufficiently bright colors.   Lighting conditions are an important factor in mobile development.  For example, think about if your application could be used in low light or outdoor sunlight conditions.
  • Avoid very small font sizes.  The screen on a mobile device is very small as-is, and actions on a mobile device are often performed at arms length away from the user (in a warehouse, for example).  If a user has to interrupt their workflow to bring the device in front of their face to read the text, then your font is too small.
  • Make the user interface predictable and consistent in your application, keep ‘OK’ and ‘Cancel’ actions in the same location throughout your interface.  The same buttons should perform the same actions throughout your application.  To stay consistent with other Windows Mobile interfaces, one recommendation is to always assign the left soft key to ‘Back’ or ‘Cancel’ actions, and to assign the right soft key to ‘Next’ or ‘OK’ actions.  Another Microsoft recommendation is to avoid overriding the hardware buttons (i.e. the Home button).
  • Ensure your UI elements are appropriately sized.  Buttons sized for a stylus should be at least 21 pixels square; buttons sized for fingers should be at least 38 pixels square.
  • Keep screen rotation in mind – developing to account for rotation is a pain, but very important for consumer applications.  Your options though are limited to either dynamically resizing the content, or to just design for a square screen.
  • Scrolling is discouraged in Windows Mobile applications, because it is kind of a pain for the end user.  Try to keep your content on one screen length/width if possible.
  • If your target devices may feature a keyboard, assign common actions to key shortcuts.  This can greatly increase efficiency for power users.
  • For displaying information, make use of Summary, Detail, and Edit views.  A ‘Summary’ view displays only the most necessary and relevant information about an item.  To access less commonly used information about an item, the user can drill down to a more complete ‘Detail’ view.  If a user needs to edit the information, they can access an ‘Edit’ view.
  • Ensure you’re setting focus on the appropriate text entry fields in bar code scanning scenarios, etc.  If a user is wearing gloves and has to take them off to set focus on a field before they scan a pallet, they’re going to hate your application.

 13 Dec 2009 @ 5:00 PM 

Over time while using ASP.NET I’ve collected a pretty good handful of best practices that I try to employ on my projects – most of them are things that will simplify the ASP.NET development experience, solutions to common problems, or tips that will just make your life easier.  Most of the best practices are only applicable to WebForms, but some are applicable to ASP.NET MVC as well.

  • Don’t write .NET code directly in your ASPX markup (unless it is for databinding, i.e. Eval statements). If you also have a code-behind, this will put your code for a page in more than one place and make the code less manageable. Put all .NET code in your code-behind.  Things can get complex and difficult to debug very quickly when you’re looking at code executing in two different places.
  • SessionPageStatePersister can be used in conjunction with ViewState to make ViewState useful without increasing page sizes. Overriding the Page’s PageStatePersister with a new SessionPageStatePersister will store all ViewState data in memory, and will only store an encrypted key on the client side.  This will make your pages smaller and download faster if you have a lot of ViewState data for some reason, however it will increase your memory usage on the server – so tread carefully.  See example below for how to use SessionPageStatePersister.
protected override PageStatePersister PageStatePersister
{
    get { return new SessionPageStatePersister(this); }
}
  • Create a BasePage that your pages can inherit from in order to reuse common code between pages.  Simple object oriented design principles – if you have common functions between pages, like security for example – put it in a base class that inherits from System.Web.Page, and have your pages inherit from that base page.
  • Create a MasterPage for your pages for visual inheritance.  Don’t use ASP server-side includes.  Pages with vastly different visual styles should use a different MasterPage.  Don’t use a Master page for code inheritance.
  • Make use of the ASP.NET Cache in order to cache frequently used information from your database.  Build (or reuse) a generic caching layer that will wrap the ASP.NET Cache.  If you’re loading the same list from the database into a drop down every time a page loads, you should be pulling that list from the cache based on how dynamic it needs to be.
  • Wrap ViewState objects with Properties on your Pages to avoid development mistakes in spelling, etc. when referencing items from the ViewState collection.  For example, you should only have ViewState["key"] once in your page per property.  See example below.
private int SampleId
{
    get { return ViewState["SampleId"] == null ? 0 : (int)ViewState["SampleId"]; }
    set { ViewState["SampleId"] = value; }
}

  • Avoid putting large objects and object graphs in ViewState, use it mainly for storing IDs or very simple DTO objects.  This is the reason people always complain about huge viewstate – they’re storing something like DataSets in ViewState (terrible idea).  If you stick to small objects with a limited number of properties or just integer IDs, your ViewState data will not be unmanageably large and ViewState is totally usable.
  • Wrap the ASP.NET Session with a SessionManager class to avoid development mistakes in spelling, etc. when referencing items from Session.  Just another way to cut down simple development mistakes.
  • Make extensive use of the appSettings key/value configuration values in the web.config – wrap ConfigurationManager.AppSettings with a class that can be used to easily retrieve strongly-typed configuration settings without having to remember the keys from the web.config.  If you have settings, behaviors, etc. that need to change between different deployments of your application, those should be controlled via settings in the web.config.  For example, we’ll often get requests like “We want feature X to go live at the end of the month” – so build, test, and deploy the update ahead of time, but add a web.config value that controls whether or not the feature appears (e.g. FeatureXEnabled=”False”); on the day of go-live, just flip it to “True”.  A minimal sketch of this kind of wrapper follows this list.
  • Avoid the easiness of setting display properties on your UI controls, instead use CSS styles and classes – this will make your styles more manageable.  Just a general web development best practice.
  • Create UserControls in your application in order to reuse common UI functionality throughout your pages. For example, if a drop down list containing a collection of categories will be used in many places in the site – create a CategoryPicker control that will data bind itself when the page is loaded.  This is my #1 time-saving best practice, yet I’m always surprised how often I see the same drop down list with the same data being used the same way on 20 different pages, with the same type-unsafe databinding logic duplicated 20 times!
  • Use Properties on your UserControls to setup things like default values, different displays between pages, etc. Value type properties can be defined on your UserControls and then be set in your ASP.NET markup by using class level properties on UserControls.  This is a great way to get even more mileage out of reusing your UserControls – watch out for increased complexity of your UserControl logic though.
  • Make use of the ASP.NET validation controls to perform simple validations, or use the CustomValidator to perform complex validations.
  • Create a user-friendly error handling page that can be redirected to when an unhandled exception occurs within your website.  Log any exceptions that come to this page.  The redirection can occur via the Page_Error event in your Base Page, the Application_Error event in your Global.asax, or within the error handling section in the web.config.  Basically, whichever method you pick – make sure you’re not letting any exceptions go unhandled or unlogged!
  • When working with pages that use a highly dynamic, data-driven display, use the 3rd party (free) DynamicControlsPlaceholder control created by Denis Bauer to simplify the code needed to save the state of dynamically added controls between postbacks.  This little control has saved me countless hours of pain in creating pages with highly dynamic UserControls.  One gotcha – if you use event handling delegates in a UserControl, you have to hook them up on every postback; a little messy, but usually not a big deal.  Event handlers are the only “state” that isn’t saved between postbacks if you use this control.
  • Turn ViewState off on controls and UserControls that don’t need it.
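As an example of the configuration and Session wrapper bullets above, here is a minimal sketch – the key names (FeatureXEnabled, CacheMinutes, CurrentUserId) are just examples, not part of any framework:

using System;
using System.Configuration;
using System.Web;

// Strongly-typed access to appSettings values; key names are illustrative only.
public static class AppConfig
{
    public static bool FeatureXEnabled
    {
        get { return bool.Parse(ConfigurationManager.AppSettings["FeatureXEnabled"] ?? "false"); }
    }

    public static int CacheMinutes
    {
        get { return int.Parse(ConfigurationManager.AppSettings["CacheMinutes"] ?? "10"); }
    }
}

// Typed access to Session values so the string keys live in exactly one place.
public static class SessionManager
{
    private const string CurrentUserIdKey = "CurrentUserId";

    public static int? CurrentUserId
    {
        get { return (int?)HttpContext.Current.Session[CurrentUserIdKey]; }
        set { HttpContext.Current.Session[CurrentUserIdKey] = value; }
    }
}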

 12 Nov 2009 @ 9:36 PM 

Ran into an interesting problem yesterday.  A few months ago we helped a client redesign an ASP.NET web application to fit into an iframe within their CMS rather than being a standalone site.  Easy enough task.  Testing was completed and the site was rolled out.

Now, several months down the road with the application iframe’d and in production, one random feature of the application was unexpectedly breaking, and it didn’t make any sense – the only way the behavior could occur would be if an object retrieved from Session were coming back as null, which turned out to be the case.  The browser was somehow losing the ASP.NET Session cookie.  Furthermore, the feature was working fine in Firefox but not in Internet Explorer – very strange.

The problem was that Internet Explorer will not accept cookies from a page within an iframe where the domain name is different from the top level page.  So, the url of the iframe’d page was www.clientsite1.com and the url of the page hosting the iframe was www.clientsite2.com.

To get around this, you need to add a P3P Compact Policy to your HTTP responses.  P3P is a protocol that allows websites to pass information to the browser regarding their intent to use information collected from the user.  Internet Explorer is the only browser that implements the protocol, and it only uses it for cookie blocking at that.

To add a P3P compact policy header in ASP.NET that will allow your cookies to be accepted by the browser from a different domain from within an iframe, add this block of code to your Global.asax.

protected void Application_BeginRequest(object sender, EventArgs e)
{
     HttpContext.Current.Response.AddHeader("p3p","CP=\"IDC DSP COR ADM DEVi TAIi PSA PSD IVAi IVDi CONi HIS OUR IND CNT\"");
}



