What is Optimization?

Sunday, September 21, 2008

Optimization is the process of finding the best solution (optimal solution) to a given problem. It is a common misconception that optimization is synonymous with maximization. In truth, optimization may mean either maximization or minimization, depending on the type of problem at hand.

For example, the problem of finding a solution that provides the maximum profit for a given set of resources and constraints is a maximization problem. The same problem could be modeled as a minimization problem to find a solution that provides the minimum cost for a given set of resources and constraints.
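This duality is easy to see in code: maximizing an objective is the same as minimizing its negation. A minimal Python sketch, where the profit function and the range of candidate solutions are invented for illustration:

```python
# Hypothetical profit for producing x units (concave, peaks at x = 5).
def profit(x):
    return -(x - 5) ** 2 + 25

candidates = range(0, 11)

best_max = max(candidates, key=profit)                # maximization view
best_min = min(candidates, key=lambda x: -profit(x))  # same problem, minimization view

assert best_max == best_min  # both views find the same optimum
print(best_max, profit(best_max))  # → 5 25
```

Real problems would of course use a proper solver rather than enumeration, but the equivalence of the two views is the same.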

Optimization can be of two types:
  1. Global optimization
  2. Local optimization

A global optimum is the best solution over the entire search space, whereas a local optimum is only the best within its neighbourhood and can change from run to run of the optimization process, depending on the initial solution set. While the global optimum is desirable, in some cases it may not be possible to find it, or even to verify whether a given solution is globally optimal. Global optimization also tends to require more computational power.
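The difference can be illustrated with a toy landscape that has one local and one global peak; the function, the integer bounds and the simple hill-climbing search below are all invented for illustration:

```python
# A one-dimensional toy landscape with two peaks: a local optimum at x = 2
# (value 4) and the global optimum at x = 8 (value 9).
def f(x):
    return -0.5 * (x - 2) ** 2 + 4 if x < 5 else -(x - 8) ** 2 + 9

def hill_climb(x, lo=0, hi=10):
    # greedy local search over the integers: always move to a better neighbour
    while True:
        neighbours = [n for n in (x - 1, x + 1) if lo <= n <= hi]
        best = max(neighbours, key=f)
        if f(best) <= f(x):
            return x  # no better neighbour: a (possibly only local) optimum
        x = best

# one run from a poor starting point gets stuck on the local peak...
assert hill_climb(0) == 2

# ...while restarting from every point in the range recovers the global peak
best = max((hill_climb(s) for s in range(0, 11)), key=f)
print(best, f(best))  # → 8 9
```

The restart loop is the crudest possible "global" strategy; it also shows why global optimization costs more: each restart is a full local search.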

Even though various optimization techniques exist, the best technique depends on the type of problem, the domain and the business requirements. The following are some of the well-known optimization techniques:
  • Linear Programming

  • Integer Programming

  • Mixed-Integer Programming

  • Constraint Programming

  • Genetic Algorithms

  • Simulated Annealing

  • Hopfield Neural Network

What is Image Thinning?

Monday, May 19, 2008

Image thinning is the process of reducing the width of a digitised pattern to just a single pixel while preserving its topological properties. The output of thinning is called a skeleton. A skeleton provides an abstraction of the global shape of the object, and it normally requires less storage space than the original pattern while preserving the pattern's essential structural information.

Neural networks have been successfully applied to image thinning and to pattern recognition applications such as Optical Character Recognition (OCR) and medical imaging.

Constraints in Genetic Algorithms (GA)

Monday, April 28, 2008

Most real-world optimization problems have constraints, which limit the feasible portion of the search space. Constraints are of two types: hard constraints and soft constraints. Hard constraints have to be met at any cost, irrespective of the objective function. Soft constraints are more flexible: a minor violation may be acceptable if it provides a significant gain in the fitness value, but any violation incurs a certain level of penalty.

Different approaches are used to handle constraints in Genetic Algorithms, some of which are explained below:

Remove infeasible solutions: In this approach, infeasible solutions are simply thrown away, so no infeasible solutions remain in the population. Many seem to believe that the treatment of infeasible solutions should be harsh so that Genetic Algorithms avoid the forbidden regions. The foundation of Genetic Algorithms theory, however, suggests that Genetic Algorithms optimize by combining partial information from the population, so this approach can lose valuable information: infeasible solutions may still contain fit schemata. Another disadvantage is that the algorithm spends much time evaluating and rejecting infeasible solutions, especially in highly constrained problems.

Repair infeasible solutions: In this approach, the algorithm converts a solution that violates hard constraints into one that does not, replacing each infeasible solution with its repaired equivalent. The disadvantage is that repair strategies have to be problem-specific.
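A minimal sketch of such a repair operator, here for a 0/1 knapsack problem with a hard capacity constraint; the item weights, values and the greedy drop-worst-ratio repair strategy are illustrative choices, not a general recipe:

```python
# Hypothetical knapsack instance: four items, hard capacity constraint.
weights = [4, 3, 2, 5]
values = [10, 7, 4, 12]
CAPACITY = 8

def total(bits, xs):
    # sum the quantities of the items selected by the bit-string
    return sum(x for b, x in zip(bits, xs) if b)

def repair(bits):
    # greedily drop the item with the worst value/weight ratio
    # until the hard capacity constraint is satisfied
    bits = list(bits)
    while total(bits, weights) > CAPACITY:
        chosen = [i for i, b in enumerate(bits) if b]
        worst = min(chosen, key=lambda i: values[i] / weights[i])
        bits[worst] = 0
    return bits

infeasible = [1, 1, 1, 1]  # total weight 14 > capacity 8
print(repair(infeasible))  # → [1, 0, 0, 0]
```

Note how problem-specific this is: the repair rule only makes sense because we know the chromosome encodes item choices with a single weight budget.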

Penalize infeasible solutions: Penalty-function methods have been the most popular approach in Genetic Algorithms because of their simplicity and ease of implementation. In this approach, the algorithm transforms a constrained optimization problem into an unconstrained one. Depending on the problem and the importance of the constraint, the penalty function can be uniform, polynomial, exponential or stepped. The advantage of this approach is that it can still exploit infeasible solutions. However, the most difficult aspect of the penalty-function approach is finding appropriate penalty parameters to guide the search towards the constrained optimum.
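A minimal sketch of a static, quadratic (polynomial) penalty for a simple constrained minimization; the objective, the constraint and the penalty weight below are invented for illustration, and choosing that weight well is exactly the hard part mentioned above:

```python
# Toy problem: minimize f(x, y) = x^2 + y^2 subject to x + y >= 1.
PENALTY_WEIGHT = 100.0  # illustrative, untuned choice

def fitness(x, y):
    objective = x ** 2 + y ** 2
    violation = max(0.0, 1 - (x + y))  # amount by which the constraint is broken
    return objective + PENALTY_WEIGHT * violation ** 2  # quadratic penalty term

# a feasible point is scored on the objective alone...
assert fitness(0.5, 0.5) == 0.5
# ...while an infeasible one pays for its violation
assert fitness(0.0, 0.0) == 100.0
```

With the penalty folded into the fitness, an unmodified GA can search the whole space, feasible and infeasible alike.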

Selection in Genetic Algorithms

Friday, January 18, 2008

We saw the basic Genetic Algorithms operations in the previous post. In this post, let us see, in detail, how the selection, or reproduction, operator works.

Selection is the procedure by which candidate solutions are chosen for recombination to generate offspring for the next generation. The chance that a particular candidate solution, or string, will be selected depends on the string's fitness value.

There are many types of selection operators; one, for example, might always select the fittest individuals and discard the remaining solutions. Many variants of the selection operator are in use, depending on the domain, and it is difficult to say which is better.

The most common selection method used in Genetic Algorithms is the Roulette Wheel Method, which selects strings probabilistically based on their relative fitness values, calculated from a fitness function. At the time of offspring creation, a simple spin of the roulette wheel yields the lucky candidate. In roulette-wheel selection, highly fit candidate solutions get more chances of being selected for the next generation.

We will see the Roulette Wheel method of selection, in detail, later on.
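As a preview, here is a minimal Python sketch of roulette-wheel selection; the population strings, their fitness values and the number of spins below are invented for illustration:

```python
import random

# Made-up population: each string's chance of being picked is
# proportional to its fitness (fitnesses sum to 100).
population = ["0110", "1100", "1111", "0001"]
fitness = [10.0, 20.0, 60.0, 10.0]

def roulette_select(pop, fit):
    spin = random.uniform(0, sum(fit))  # spin the wheel
    cumulative = 0.0
    for individual, f in zip(pop, fit):
        cumulative += f
        if spin <= cumulative:
            return individual
    return pop[-1]  # guard against floating-point rounding

random.seed(1)
picks = [roulette_select(population, fitness) for _ in range(1000)]
# the fittest string holds 60% of the wheel, so it wins roughly 60% of spins
print(picks.count("1111") / 1000)
```

The cumulative-sum walk is the "wheel": each individual owns an arc whose length equals its fitness.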

Basic Genetic Algorithms Operations

Thursday, January 17, 2008

As we saw earlier, Genetic Algorithms (GA) have a solid basis in genetics and evolutionary biology. There are basically two kinds of operations in Genetic Algorithms, which are discussed below.

Genetic Operation: The genetic operation mimics the process of heredity of genes to produce new offspring in each generation.

Evolution Operation: The evolution operation mimics the process of biological evolution to create the next population from generation to generation.

The genetic operations include the crossover and mutation operators, while the evolution operation includes the selection, or reproduction, operator. We will see these three Genetic Algorithms operators in detail in the next post.
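As a preview, the two genetic operators can be sketched in a few lines of Python for bit-string chromosomes; the chromosomes, cut point and mutation rate below are illustrative:

```python
import random

def crossover(parent_a, parent_b, point):
    # single-point crossover: swap the tails of the two parents at the cut point
    return (parent_a[:point] + parent_b[point:],
            parent_b[:point] + parent_a[point:])

FLIP = {"0": "1", "1": "0"}

def mutate(chromosome, rate=0.05):
    # flip each bit independently with probability `rate`
    return "".join(FLIP[b] if random.random() < rate else b for b in chromosome)

child_a, child_b = crossover("11111", "00000", 2)
print(child_a, child_b)  # → 11000 00111
```

Crossover recombines existing genetic material, while mutation injects new material; a GA typically applies crossover with high probability and mutation with low probability.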

Genetic Algorithms: The Iterative Loop

Wednesday, January 16, 2008

Genetic Algorithms iteratively generate new solutions from the current set of solutions and replace some or all of the existing population with the newly created members. One iteration of a Genetic Algorithm proceeds as follows:

  • An initial population of strings is created randomly.

  • The goodness/strength of each individual in the population is measured.

  • Individuals are selected into the parent pool for the creation of the next generation.

  • New individuals (offspring) are created by performing crossover and/or mutation on the selected individuals.

  • The new population is tested against the stopping criteria. If they are satisfied, the loop stops; otherwise the next iteration is performed.
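The loop above can be sketched as a short Python program; the OneMax toy problem (fitness is simply the number of 1-bits) and all parameter values below are illustrative choices:

```python
import random

random.seed(42)
LENGTH, POP_SIZE, GENERATIONS = 20, 30, 60

def fitness(s):
    # OneMax: count the 1-bits in the string
    return s.count("1")

# step 1: random initial population
pop = ["".join(random.choice("01") for _ in range(LENGTH)) for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # steps 2 and 5: measure fitness; stop if a perfect string exists
    if max(fitness(s) for s in pop) == LENGTH:
        break
    next_pop = []
    while len(next_pop) < POP_SIZE:
        # step 3: tournament selection of two parents (fitness-biased)
        a, b = (max(random.sample(pop, 3), key=fitness) for _ in range(2))
        # step 4: single-point crossover plus per-bit mutation
        cut = random.randrange(1, LENGTH)
        child = a[:cut] + b[cut:]
        child = "".join(c if random.random() > 0.02 else random.choice("01")
                        for c in child)
        next_pop.append(child)
    pop = next_pop  # replace the population with the new generation

best = max(pop, key=fitness)
print(best, fitness(best))
```

Tournament selection stands in here for the roulette wheel described elsewhere on this blog; both bias reproduction towards fitter strings.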

When Should You Use Genetic Algorithms (GA)?

Tuesday, January 15, 2008

Genetic Algorithms (GA) are more suitable for the following problems:
  • Domain knowledge is not sufficient to narrow the search space. Genetic Algorithms overcome the main drawback of expert systems: the need to provide explicit rules.


  • Problems where the traditional search methods fail.


  • The search space is vast, complex and poorly understood. Genetic Algorithms have been shown to provide robust search in complex search spaces.

What are Genetic Algorithms?

Monday, January 14, 2008

Genetic Algorithms are search algorithms that exploit the idea of survival of the fittest. They derive their name from the fact that they are modeled after genetics and evolution. Genetic Algorithms have proven robust, flexible and efficient in vast, complex search spaces.

Artificial Intelligence Techniques

Tuesday, January 8, 2008

Artificial Intelligence techniques aim at developing intelligent machines, especially smart computer programs, by imparting to machines/programs the capability to mimic human intelligence. The objective of Artificial Intelligence is to make machines/programs keep learning, react to their environment and take decisions on their own, without human intervention. The ultimate goal of Artificial Intelligence is to make computer programs find solutions to problems as well as humans do.

The following are some of the techniques that fall under the category of Artificial Intelligence:

  • Case Based Reasoning
  • Rule Based Reasoning
  • Genetic Algorithms
  • Fuzzy Systems
  • Artificial Neural Networks
  • Intelligent Agents
We will look at each of these Artificial Intelligence techniques in detail in future posts.

CBR Applications

Friday, January 4, 2008

The following are some applications where Case-Based Reasoning (CBR) has been used:

  • Fault Diagnosis
  • Medical Diagnosis
  • Credit Card Risk Assessment
  • Design of bridges
  • Customer Service Hotline
  • Helpdesk
  • Troubleshooting Applications

When Should You Use Case Based Reasoning (CBR)?

Wednesday, January 2, 2008

Case Based Reasoning (CBR) is more suitable for the following problems:

  • The problem domain is complex and not amenable to complete mathematical modeling.


  • An explicit model is extremely difficult to elicit and represent with rules.


  • Historical data exists within the organization.


  • The domain experts have considerable difficulty writing down decision rules, but they are comfortable providing well-proven heuristics and experiences to incorporate into the case base as cases, and they have little difficulty recalling concrete cases they have encountered in the past.


  • The domain is such that the system's solutions need to be explained or justified.

Case Based Reasoning (CBR) vs Artificial Neural Networks (ANN)

Tuesday, January 1, 2008

The main similarity between Artificial Neural Networks (ANN) and Case Based Reasoning (CBR) is that neither needs an explicit model; both techniques avoid the knowledge-acquisition bottleneck. ANN is essentially a data-mining technique that can work directly from the data.

But the main criticism against ANNs is that they work as a "black box" and so suffer from a lack of transparency. The validity of the system's decisions cannot be judged because of the nature of the inner workings: the output of the network is a function of weighted vectors that depends on the network's architecture and the learning mode used. This makes ANNs difficult to use for diagnosis applications, as most diagnoses need an explanation for the result obtained.

ANNs are not suitable when background domain knowledge has to be taken into account, whereas in CBR domain knowledge can be incorporated in the form of knowledge-guided clustering. Neural networks cannot cope with complex structures, and to perform well they need exhaustive coverage of the domain during the "learning" phase. CBR does not need exhaustive coverage of the domain, as cases can be added to the case base incrementally.
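That incremental nature can be sketched with a minimal nearest-neighbour case retrieval; the feature encoding and the fault-diagnosis cases below are invented for illustration:

```python
# Cases are stored as (feature-vector, solution) pairs; the closest
# past case is retrieved and its solution reused.

def distance(a, b):
    # squared Euclidean distance between feature vectors
    return sum((x - y) ** 2 for x, y in zip(a, b))

case_base = []

def add_case(features, solution):
    case_base.append((features, solution))  # no retraining needed

def retrieve(query):
    return min(case_base, key=lambda c: distance(c[0], query))[1]

add_case((1.0, 0.0), "fault: overheating")
add_case((0.0, 1.0), "fault: power supply")
assert retrieve((0.9, 0.1)) == "fault: overheating"

# a new case extends coverage without touching the existing ones
add_case((0.5, 0.5), "fault: fan bearing")
assert retrieve((0.45, 0.55)) == "fault: fan bearing"
```

Contrast this with an ANN, where covering the new fault would mean retraining the whole network on an extended dataset.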