Monday, September 2, 2013

Why should companies move traditional service desks to the cloud?

As cloud tools grab more market share and offer ever better services, it is a good time for companies to re-evaluate their investments in desktop service management tools. Switching to the cloud has plenty of pros and cons that are already familiar, so let’s look instead at the characteristics we should be interested in when searching for a service desk in the cloud: 
Technology – technology changes rapidly and becomes outdated very fast, and this can be one of the most compelling reasons to migrate to a new ITSM solution. A tool that has not changed with the times cannot support the service improvement initiatives required by the business. Cloud ITSM solutions can adopt new technologies quickly without risking unavailability of the service desk.
Service Desk Maintenance Costs - Many organizations may find that the maintenance cost of their current solution is unacceptable. Most cloud providers include the maintenance cost in the subscription fee and deliver their maintenance services almost instantaneously.
Administration Cost - Most ITSM solutions require some level of administration to change parameters, configure access privileges, set up email and SMS notifications, and amend service level targets based on priorities or business services. With cloud solutions most of this is done by the cloud provider, which can save a lot of cost and time.
Customisation - The chosen ITSM solution should be one that delivers the majority of requirements out-of-the-box. If the organization chooses a solution that requires major customisation to meet its needs, this is going to cause a major headache and cost a lot of money. Not only is full-blown customisation costly at the outset, it also makes upgrades costly, as all of the customisation has to be reapplied. This is rarely the case with cloud ITSM solutions: most often upgrades are applied unnoticed, with little or no interruption to normal working.
Version Lock - a situation that occurs when the path to upgrading to a newer version of an ITSM solution is so complex, time-consuming and costly that the upgrade is difficult to justify. The IT support team is then stuck with an old solution, which may put in question its ability to provide support in an evolving business environment, since the potential benefits of the newer version cannot be used. Cloud solution providers offer the latest version to all of their clients.
Vendor Lock - Most companies fear a long and difficult migration path to an alternative tool. The time and investment that may need to be spent on customising process flows and integrating with other systems can lead an organization to limit its own migration options. This pushes companies to keep the same old, poorly designed processes and restricts the organisation’s opportunities to identify the best solution for its requirements. Changing cloud ITSM providers is usually simpler, and clients may barely notice the change; some providers offer data export/import options for migrating from other tools.
Inflexibility - Another reason for migrating to the cloud can be the inflexibility of the desktop solution. You cannot rely on a desktop solution in a rapidly changing environment where the needs of the ITSM industry and of customers are ever increasing. If the ITSM solution cannot support rapidly changing requirements, the IT team will be unable to respond. Cloud solutions provide this flexibility out of the box, allowing the customer experience to be tailored to specific needs and preferences.
Version/Solution Retiring - A final reason for migration may be a vendor retiring its solution or a version of it. A vendor often supports many solutions (not just ITSM) and many versions of those solutions, and in times of financial constraint it may look to reduce the number of solutions and/or versions supported. If a solution is being withdrawn, the organization may be forced into an upgrade to a new version or an alternative solution from the same vendor, within timeframes that the vendor dictates.
If the above characteristics have convinced you to move your service desk to the cloud, focus on the following steps in order to choose the right provider (a simple scoring sketch follows the list):
  • Define the number of agents that will work in the service desk
  • Define the functionalities you need - decide which functionalities are must-haves and which are nice-to-haves but not strictly necessary
  • Define the needed package - search the packages available from each vendor against your functionality list
  • Compare prices - note the price of each package and check whether it decreases when you pay on a yearly basis
  • Use the trial period to carefully evaluate the solution
  • And do not forget that automated processes should follow ITIL best practices.
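
As a minimal illustration of how these steps can be pulled together, here is a toy weighted-scoring sketch in Python; the vendors, criteria, weights and scores are invented for the example and are not recommendations.

    # vendor_score.py - toy weighted scoring of cloud service desk packages
    # (vendors, criteria, weights and scores are invented for illustration)
    CRITERIA_WEIGHTS = {
        "must_have_coverage": 0.5,    # share of must-have functionalities supported
        "nice_to_have_coverage": 0.2,
        "price_fit": 0.2,             # 1.0 = well within budget, 0.0 = over budget
        "trial_impression": 0.1,      # subjective score from the trial period
    }

    VENDORS = {
        "Vendor A": {"must_have_coverage": 1.0, "nice_to_have_coverage": 0.6,
                     "price_fit": 0.7, "trial_impression": 0.8},
        "Vendor B": {"must_have_coverage": 0.8, "nice_to_have_coverage": 0.9,
                     "price_fit": 0.9, "trial_impression": 0.6},
    }

    def score(vendor):
        """Weighted sum of criterion scores; higher is better."""
        return sum(weight * vendor[criterion]
                   for criterion, weight in CRITERIA_WEIGHTS.items())

    for name, data in sorted(VENDORS.items(), key=lambda kv: score(kv[1]), reverse=True):
        print(f"{name}: {score(data):.2f}")
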
In my next post I will present 10 ITSM cloud solutions.

Monday, August 12, 2013

Development Testing Maturity Model

Development Testing is a process used in software development whose goal is to reduce software development risks, costs and time. The process applies a broad range of defect prevention and detection strategies to achieve this. Development Testing usually includes a variety of test methodologies such as static code analysis, data flow analysis, metrics analysis, peer code reviews, unit testing, code coverage analysis, traceability and other software verification practices.

The main point is that this form of testing is performed by the software developer during the development phase of the software development lifecycle. Development testing does not aim to replace traditional QA, but rather to complement it. Its purpose is to eliminate development errors before the code reaches the QA phase, and to increase software quality and the efficiency of the development and QA process while decreasing the cost of eliminating production errors.

The process of implementing Development Testing starts like any other best practice, standard or framework implementation. Policies that state the organization's expectations for availability, security, reliability, performance and regulatory compliance are usually defined. The team is then trained on these policies, and after the training, Development Testing practices are implemented to align software development activities with these policies. These Development Testing practices include:
  • Practices that prevent defects through a Deming (plan-do-check-act) approach that promotes reducing the opportunity for error via root cause analysis and improvements.
  • Practices that detect defects immediately after they are introduced, on the premise that defects are fastest, easiest and cheapest to fix at the moment they are found.
The idea of using different defect prevention and defect detection practices is based on the premise that different Development Testing techniques are designed to target different types of defects at different points in the software development lifecycle. Applying multiple techniques therefore decreases the overall risk of defects slipping through to production.
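
To make the practices above concrete, here is a minimal sketch of one of them, a developer-written unit test; the module and function are hypothetical and serve only to show the shape of the practice.

    # test_discount.py - a minimal developer-written unit test (hypothetical example)
    import unittest

    def apply_discount(price, percent):
        """Return price reduced by percent; reject invalid percentages."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100.0), 2)

    class ApplyDiscountTest(unittest.TestCase):
        def test_typical_discount(self):
            self.assertEqual(apply_discount(200.0, 25), 150.0)

        def test_invalid_percent_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(100.0, 150)

    if __name__ == "__main__":
        unittest.main()

If the coverage package is installed, running "coverage run -m unittest" followed by "coverage report" shows which lines the tests exercise, which is the code coverage analysis mentioned above.
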
Development Testing Maturity Model
Coverity (a software vendor that develops development testing solutions) has created a five-level Development Testing Maturity Model based on a set of services and best practices designed to help companies adopt development testing within their organizations. These are the five maturity levels:

LEVEL ONE: Automatic Defect Detection

This level is about detecting and repairing critical issues through automated notifications as part of the software build process, as well as preventing any new defects from entering the system.
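
As a rough illustration of such a build gate (not Coverity's actual integration), a build script can run a static analyzer and fail when it reports findings; flake8 is used below as a stand-in analyzer for Python code.

    # build_check.py - fail the build if static analysis reports defects
    # (illustrative sketch only; a real deployment would use the vendor's build integration)
    import subprocess
    import sys

    def run_static_analysis(paths):
        """Run flake8 (a stand-in static analyzer) and return its exit code."""
        result = subprocess.run(["flake8", *paths], capture_output=True, text=True)
        if result.stdout:
            print(result.stdout)              # show findings in the build log
        return result.returncode

    if __name__ == "__main__":
        targets = sys.argv[1:] or ["src/"]
        code = run_static_analysis(targets)
        if code != 0:
            print("Build failed: static analysis reported defects.")
        sys.exit(code)                        # a non-zero exit breaks the build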

LEVEL TWO: Identification of Residual Risks

At level two, developers identify residual risks in key components of code, such as code shared by different groups or authentication routines. These components are critical to the heart of the code base and require further automated testing of the different logic paths.

LEVEL THREE: Developer Workflow Optimization

The development testing process is integrated with other SDLC systems that developers use on a regular basis, such as source control management and bug tracking. For example, the source control management system is queried for the automated identification of file owners so defects can be automatically assigned to them and tracked. By integrating with other mission-critical systems, the development process becomes more efficient, and it fosters the adoption of the testing platform into the organization.
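
A minimal sketch of the kind of lookup described above: the git history is used to guess a file's most frequent author so a defect can be routed to them. The defect ID, file path and the use of a print statement instead of a real bug tracker call are all placeholders.

    # assign_defect.py - guess a file's owner from git history and route a defect to them
    # (illustrative sketch; real integrations use the SCM and bug tracker APIs)
    import subprocess
    from collections import Counter

    def likely_owner(path):
        """Return the author email that most often committed changes to path."""
        log = subprocess.run(
            ["git", "log", "--format=%ae", "--", path],
            capture_output=True, text=True, check=True,
        )
        authors = Counter(line for line in log.stdout.splitlines() if line)
        return authors.most_common(1)[0][0] if authors else None

    def assign_defect(defect_id, path):
        owner = likely_owner(path)
        if owner:
            # placeholder for a bug tracker assignment call
            print(f"Assigning defect {defect_id} in {path} to {owner}")
        else:
            print(f"No history for {path}; leaving defect {defect_id} unassigned")

    if __name__ == "__main__":
        assign_defect("DEF-123", "src/payments/auth.py")   # hypothetical defect and file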

LEVEL FOUR: Code Governance

This level promotes software code governance and quality assurance. It is characterized by establishing and enforcing consistent standards for code quality and security and by measuring improvement over time. As part of this stage, policy thresholds are established for items such as the number of high-risk defects. With policies in place, stage gates can be implemented to validate that code is in line with an organization’s governance goals before it is moved to the next stage of the development lifecycle.
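
A toy sketch of such a stage gate, assuming defect data has already been exported from the analysis tool as JSON; the threshold, severity labels and file format are invented for illustration.

    # stage_gate.py - block promotion when open high-risk defects exceed a policy threshold
    # (illustrative sketch; the policy value and defect fields are assumptions)
    import json
    import sys

    MAX_HIGH_RISK_DEFECTS = 0   # example policy: no open high-risk defects allowed

    def gate(defects):
        """Return True if the code base passes the governance gate."""
        high_risk = [d for d in defects
                     if d.get("severity") == "high" and d.get("status") == "open"]
        print(f"Open high-risk defects: {len(high_risk)} (allowed: {MAX_HIGH_RISK_DEFECTS})")
        return len(high_risk) <= MAX_HIGH_RISK_DEFECTS

    if __name__ == "__main__":
        with open(sys.argv[1]) as f:             # e.g. defects.json exported from the tool
            defects = json.load(f)
        sys.exit(0 if gate(defects) else 1)      # non-zero exit blocks the pipeline stage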

LEVEL FIVE: Enterprise Code Assurance

Once users reach level five, developers have mitigated all legacy defects and the build fails if new defects are introduced. They have established automated test cases for their critical code and for code affected by change, securing it against logic defects.

Some analyses suggest that organizations committed to the Development Testing Maturity Model develop software 50 percent faster and reduce development costs by 20 percent.

If you plan to implement this maturity model, it is wise to do so alongside ITIL best practices: defining policies is a very important part of the implementation and ITIL can help with that, and ITIL can also help reduce change-related conflicts. If you want to find out more about ITIL, I recommend reading the ITIL Lifecycle Suite 2011 Edition (5 volume set).

Thursday, August 1, 2013

Information Security and Cloud Computing

In my previous post "ITIL, ISO 20000, ISO 27001 what else is there?" I mentioned 36 different frameworks connected to IT Service Management that can be used in everyday work.
Interestingly, none of them explicitly addresses the security of using cloud services. Not even ITIL, ISO 20000:2011 or ISO 27001:2005. ISO 27001:2013, expected to come out at the end of this year, will try to change this: it will split two domains and create a new one on provider relations to cover the cloud area. There is, however, a framework that can already be applied to cloud governance: COSO’s Enterprise Risk Management – Integrated Framework. COSO stands for the Committee of Sponsoring Organizations of the Treadway Commission, an American private-sector joint initiative dedicated to the development of comprehensive frameworks and guidance on enterprise risk management, internal control and fraud deterrence, designed to improve organizational performance and governance and to reduce the extent of fraud in organizations.

What is this framework about? COSO’s Enterprise Risk Management – Integrated Framework establishes a common language and foundation that can be used to construct an effective cloud governance program tailored specifically to a given cloud solution.

In the picture below, the framework is represented as a pathway in which each ERM component (starting with internal environment) is applied in order to understand the specific advantages and disadvantages that a given solution candidate would bring to the organization. When the process is completed for each cloud solution candidate, the ideal cloud solution will emerge along with its related requisites for establishing cloud governance. In cases where a cloud solution has already been implemented, the COSO ERM framework can be used to establish, refine, or perform a quality assurance check of the cloud governance program by ensuring that all major aspects of the program (e.g., objectives, risk assessment, and risk response) have been addressed with respect to management’s requirements.
COSO ERM Framework to Cloud Computing Options

The ERM framework components:

  1. Internal Environment – The internal environment component serves as the foundation for the other components and defines the organization’s risk appetite and the way risks and controls are viewed. For instance, if management has a policy of not outsourcing any of its operations (i.e., there is a culture of risk avoidance), this policy will limit the viable options for cloud deployment and service delivery models, so that private cloud solutions might be the only acceptable alternative.
  2. Objective Setting – Management needs to evaluate how cloud computing aligns with the organization’s objectives. Depending on the circumstances, cloud computing might present an opportunity for the organization to enhance its ability to achieve existing objectives, or it might present an opportunity to gain a competitive advantage, which would require new objectives to be defined.
  3. Event Identification – Management is responsible for identifying the events (either opportunities or risks) that can affect the achievement of objectives. The complexity of event identification and risk assessment processes increases when an organization engages cloud service providers.
  4. Risk Assessment – Management should evaluate the risk events associated with its cloud strategy to determine the potential impact of each cloud computing option. Ideally, risk assessments should be completed before an organization moves to a cloud solution (a toy scoring sketch follows this list).
  5. Risk Response – Once risks have been identified and assessed in the context of organizational objectives relative to cloud computing, management needs to determine its risk response. There are four types of risk responses: avoidance, reduction, sharing and acceptance. 
  6. Control Activities – The traditional types of controls – preventive, detective, manual, automated, and entity-level – apply to cloud computing as well. The difference introduced by cloud computing is that some control responsibilities remain with the organization while others are transferred to the cloud service provider (CSP). If the quality of an organization’s existing control activities is moderate or poor, moving to a cloud solution could exacerbate internal control weaknesses. For example, if an organization with poor password controls or data security practices migrates its computing environment to a public or hybrid cloud solution, the possibility of an external security breach is likely to increase significantly because access to the organization’s technology base is now through the public Internet.
  7. Information and Communication – To effectively operate its business and manage the related risks, management relies on timely and accurate information and communications from various sources regarding external and internal events. With cloud computing, information received from a CSP might not be as timely or of the same quality as information from an internal IT function. As a result, fulfilling management’s information and communications requirements might require additional or different information processes and sources.  
  8. Monitoring – “Risk responses that were once effective may become irrelevant; control activities may become less effective, or no longer be performed; or entity objectives may change.” That statement from 2004 in the COSO’s Enterprise Risk Management – Integrated Framework remains applicable in the age of cloud computing. Management must continue to monitor the effectiveness of its ERM program to verify that the program adequately addresses the relevant risks and facilitates achieving the organization’s objectives. 
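
As a toy illustration of the risk assessment step only (this scoring scheme is not part of the COSO framework), cloud solution candidates could be compared with a simple likelihood times impact score per identified risk; the risks, candidates and numbers below are invented.

    # risk_score.py - toy likelihood x impact comparison of cloud solution candidates
    # (illustrative only; the scales, risks and scores are invented, not from COSO)
    RISKS = {
        "vendor lock-in":     {"public SaaS": (4, 3), "private cloud": (2, 3)},
        "data leakage":       {"public SaaS": (3, 5), "private cloud": (2, 5)},
        "provider viability": {"public SaaS": (2, 4), "private cloud": (1, 2)},
    }   # values are (likelihood 1-5, impact 1-5)

    def total_score(candidate):
        """Sum likelihood x impact across all identified risks for one candidate."""
        return sum(likelihood * impact
                   for likelihood, impact in (r[candidate] for r in RISKS.values()))

    for candidate in ("public SaaS", "private cloud"):
        print(candidate, "risk score:", total_score(candidate))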

What are some of the risks applicable to the cloud environment?

  • Disruptive force - Facilitating innovation (with increased speed) and the cost-savings aspects of cloud computing can themselves be viewed as risk events for some organizations
  • Residing in the same risk ecosystem as the CSP and other tenants of the cloud
  • Lack of transparency
  • Reliability and performance issues
  • Vendor lock-in and lack of application portability or interoperability
  • Security and compliance concerns
  • High-value cyber-attack targets
  • Risk of data leakage
  • IT organizational changes
  • Cloud service provider viability

Before proceeding to the cloud environment, be able to answer the following questions about your data and the cloud provider:

  • Which services and related data can be moved safely into the cloud, and when?
  • How will sensitive data be protected in storage, in transit, and in use?
  • How can access to cloud-based data and services through new hard-to-control devices such as smartphones and iPads be managed in line with security requirements?
  • What security levers built into cloud architecture components can be pulled to mitigate new risks?
  • How can companies be sure that cloud service providers are compliant with their security requirements?
  • Are industry-recognized security standards applicable?
  • Will the incremental cost of cloud security potentially offset the commercial benefits?
If you want to read more about the COSO’s Enterprise Risk Management you can download the following paper: COSO ERM Framework

Tuesday, July 23, 2013

The Importance of Green IT

Introduction

Getting a grip on climate change is one of the most important challenges facing humanity in the 21st century. It is clear that information and communication technologies (ICT) have a key role in this process, as ICT has a very important function in the transformation of our jobs and lives. Information technology is the central nervous system not only of the business sector but also of governmental and social infrastructures.

Nevertheless, the ICT industry relies on electrical energy, whose availability is limited. As ICT grows, people's dependence on it grows too. The irresponsible use of electrical energy directly impacts the financial resources of organisations and inflicts permanent damage on the environment.

The ICT industry consumes up to 8% of the total electrical energy used in the European Union and is responsible for up to 2% of the total carbon emissions discharged into the atmosphere, about the same as the aviation industry.[1] Additionally, recent studies show that on a global scale the electrical energy consumed by personal computers increases by 5% every year. On average, the electrical energy consumed by small and medium-sized firms amounts to 10% of their total IT budget; in extreme cases it can reach 50%. Nowadays the cost of the electrical energy consumed over the life of a typical computer is greater than the cost of buying it.[2] On top of this there is Moore's Law, which states that the number of transistors that fit onto an integrated circuit doubles roughly every 24 months; because hardware becomes obsolete and is replaced at the same pace, this helps explain why electronic waste is the fastest growing type of waste in the world. 
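
A rough back-of-the-envelope check of the claim about lifetime electricity cost; every figure below is an illustrative assumption, not a measurement from the cited sources.

    # pc_energy_cost.py - rough lifetime electricity cost of a desktop PC
    # (all numbers are illustrative assumptions)
    power_watts = 150          # assumed average draw of PC plus monitor
    hours_per_year = 2500      # assumed office usage (~10 h/day, ~250 days/year)
    lifetime_years = 5
    price_per_kwh = 0.15       # assumed electricity price in EUR per kWh

    kwh = power_watts / 1000 * hours_per_year * lifetime_years
    cost = kwh * price_per_kwh
    print(f"Lifetime consumption: {kwh:.0f} kWh, cost: {cost:.0f} EUR")
    # about 1875 kWh and 280 EUR with these assumptions; with higher tariffs,
    # heavier use or a cheaper machine the electricity cost can approach or
    # exceed the purchase price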

Green IT
Given the above, there is a clear need to find ways of implementing Green IT. Green IT is not focused only on reducing the IT industry's effect on the environment; it is also about using ICT to help reduce the overall environmental impact of organisations, regardless of their type, form or size. Here the term Green IT covers the systematic application of environmental sustainability criteria (safeguarding against pollution, recycling products, using clean technologies) during the design, production, purchasing, operation and disposal of IT infrastructure, and the application of the same criteria to the human and governance components of IT infrastructure. 

Naturally, legislation is a key driver in implementing these changes in organisations and in the way people live their lives. Many countries, the United Kingdom being one example, already have environmental legislation enacted as acts of parliament. However, the challenges for implementing and developing this topic anywhere in the world remain huge.

The conceptual foundation of Green IT presented here rests on the four most discussed topics in the Green IT area, virtualisation, cloud computing, data centre management and e-waste, and on the organizational motivation factors supporting Green IT adoption defined by Alemayehu Molla (2009).

A.    Virtualisation

Virtualisation as a technology is one of the easiest paths to implementing the practices of Green IT. Virtualisation allows for better use of computer systems. More importantly, virtualisation can help in creating and maintaining energy-efficient and ecological data centres. Some of the environmental advantages of virtualisation are listed below (a rough savings sketch follows the list):

  • Costs of electrical energy - a physical server draws roughly the same amount of electrical energy whether its processor is lightly or heavily loaded, so consolidating workloads reduces total consumption.
  • Costs of cooling - having fewer physical servers emitting heat in data centres reduces the load on the cooling system.
  • Electronic waste - having fewer physical servers to replace means less electronic waste for companies to deal with. 
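
A rough consolidation-savings sketch under stated assumptions; the consolidation ratio, power draw, tariff and cooling overhead are all invented for illustration.

    # consolidation_savings.py - rough yearly energy savings from virtualising servers
    # (figures are illustrative assumptions, not benchmarks)
    physical_servers = 20
    hosts_after_virtualisation = 4     # assumed 5:1 consolidation ratio
    watts_per_server = 400             # assumed average draw per physical machine
    hours_per_year = 24 * 365
    price_per_kwh = 0.15               # assumed EUR per kWh
    cooling_overhead = 1.5             # assumed 0.5 W of cooling per 1 W of IT load

    def yearly_cost(server_count):
        kwh = server_count * watts_per_server * hours_per_year / 1000 * cooling_overhead
        return kwh * price_per_kwh

    before = yearly_cost(physical_servers)
    after = yearly_cost(hosts_after_virtualisation)
    print(f"Before: {before:.0f} EUR/yr, after: {after:.0f} EUR/yr, saved: {before - after:.0f} EUR/yr")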

B.    Cloud computing

The use of cloud computing has various direct, indirect and systemic consequences for the environment. 

Direct effects are, of course, the most visible and in this instance result from a significant decrease in the amount of hardware owned and a greater use of cloud resources. This is because such services are centralised by third parties able to serve many customers simultaneously. The direct result is reduced electricity consumption by the hardware itself and for cooling.

Indirect effects of cloud computing relate to the reduction in CO2 emissions that results from implementing and operating it. Companies that use such services can focus more on their business, as fewer resources need to be dedicated to maintaining their services and infrastructure.
When talking about systemic consequences, three aspects of efficiency need to be taken into consideration:

  • the physical location and design of the data centre
  • the architecture of the platform
  • the architecture and development approach of the applications being hosted.

C.    E-waste

Electronic devices have unique characteristics which cause their production and usage to have a great impact on the environment and society. This makes managing electronics problematic and challenging. Society and the environment face the following problems:

  • Poor design and aggressive marketing by production companies
  • Electronics contain many toxic substances which make electronic waste toxic
  • Electronics contain many rare and precious materials
  • The majority of electronic waste is improperly discarded and this means that society and the environment pay the price for the poor and toxic design.

D.    Data center management

Data centres have become key elements in the functioning of businesses, academic and governmental institutions, and in everyday communication. The number of data centres grows as our society and economy change from paper-based to digital. The EPA’s (Environmental Protection Agency) report from 2007 estimates that the amount of electrical energy consumed by data centres in the USA doubled between 2000 and 2006, reaching 61 billion kWh. Based on efficiency trends at the time, it was estimated that this figure would nearly double again by 2011, exceeding 100 billion kWh, an amount equivalent to about 7.4 billion dollars in electricity costs. 

When designing green data centres, understanding how much energy the equipment uses is very important in order to be able to optimise it. For that purpose there is a need to:

  • Have a clear picture of how much energy the equipment is using at any given time
  • Decrease the amount of physical infrastructure
  • Install more servers on more powerful energy sources
  • Have a monitoring and reporting platform for energy use
  • Lower the costs of managing a data centre

E.    Organizational motivation factors for Green IT

An organisation is a collective whose behaviour is influenced by human motivating factors. In the context of adopting information technology, motives can be defined as the desires that drive an organisation to adopt a specific innovative system.

Motives can be analysed in terms of their locus (origin) and their focus. The locus of motivation can be internal or external: internal motives stem from the organisation's mission, beliefs and system of values, while external motives arise from government intervention (formal) or market pressure (informal). 

In terms of focus, motives can be classified into two broad categories: techno-economic and socio-political. Techno-economic motives relate to adopting new technologies and systems to improve the operation of the organisation, while socio-political motives relate to adopting specific systems under the influence of outside authority.
Locus and Focus of Green IT Motivation
Eco-efficiency has an internal locus and an economic focus. It relates to the desire to implement specific practices and technologies that improve the eco-efficiency of IT while at the same time realising economic aims such as reducing costs.

Eco-effectiveness as a motive appears when the organisation initiates Green IT activities as a consequence of its beliefs and system of values connected to eco-sustainability, for reasons other than economic gain.

Eco-responsiveness as a motive appears as a result of an external locus and economic factors. The emphasis is on initiatives intended to satisfy a specific demand in the Green IT market.

Eco-legitimacy as a motive appears as a result of the political and social pressures that organisations face. Political pressures are directed by governments and can take the form of regulations, standards or taxes. In this case companies decide to implement Green IT practices only when they face this kind of pressure.

References:
[1] Adrian Sobotta, Irene Sobotta, John Hotze, Greening IT: How Greener IT Can Form a Solid Base for a Low-Carbon Society, 2009, Foreword.
[2] Mark G. O’Neill, Green IT for Sustainable Business Practice: An ISEB Foundation Guide, British Informatics Society Limited, 2010, pp. 2, 4.


Thursday, July 18, 2013

ITIL described in 3 minutes video


We have all struggled to understand how certain ITIL best practices should be implemented in our work. Remarkably, the following video explains it in three minutes. Just imagine that you are having dinner in a restaurant, and enjoy the video.




Wednesday, July 17, 2013

ITIL, ISO 20000, ISO 27001 what else is there?

Have you ever wondered how many different best practices and standards connected to IT service management there are? Van Haren Publishing made a list and, believe it or not, there are no fewer than 36 different standards and frameworks that you can follow in your work. This is the list:
  1. Agile 
  2. Amsterdam Information Management Model (AIM)
  3. ArchiMate® 
  4. ASL® 
  5. Balanced Scorecard 
  6. BiSL®
  7. CATS CM® 
  8. CMMI® 
  9. COBIT® 
  10. EFQM 
  11. eSCM-CL 
  12. eSCM-SP
  13. Frameworx 
  14. ICB® 
  15. ISO 9001 
  16. ISO 14000 
  17. ISO/IEC 15504 
  18. ISO/IEC 27000 series 
  19. ISO 31000 
  20. ISO 38500 
  21. ISO/IEC 20000 
  22. ITIL® 2011 
  23. Lean management 
  24. M_o_R® 
  25. MoP™ 
  26. MSP® 
  27. OPBOK 
  28. P3O®
  29. PMBOK® Guide
  30. PRINCE2® 
  31. SABSA® 
  32. Scrum 
  33. Six Sigma 
  34. SqEME® 
  35. TMap® NEXT
  36. TOGAF® 
Van Haren has also included a short explanation for all of them. For example, have you ever heard about the EFQM (European Foundation for Quality Management) Excellence Model before? The following description is included:

The basics 

The EFQM Excellence Model is a management framework for helping organizations in their drive towards excellence and increased competitiveness. The EFQM organization does not issue certificates of compliance but runs comprehensive awards and recognition programs for organizations of all sizes and sectors.

Summary 

The EFQM Excellence Model was introduced in 1992 as the framework for assessing organizations for the EFQM Excellence Award and is now the most widely used organizational framework in Europe. It is reviewed every three years; the current version was released in 2010. EFQM, based in Brussels, is its custodian.
The Model is a non-prescriptive framework based on nine key criteria (Figure below). Five criteria are ‘Enablers’ (Leadership, Policy and Strategy, People, Partnerships and Resources, and Processes) and four are ‘Results’ (Customer Results, People Results, Society Results and Key Performance Results). The ‘Enabler’ criteria cover what an organization does; the ‘Results’ criteria cover what an organization achieves.
The EFQM Excellence Model (Source: EFQM.org)
The EFQM Model’s nine boxes represent the criteria against which to assess an organization’s progress towards excellence. At the heart of the Model lies the logic known as RADAR, which consists of four elements: Results; Approach; Deployment; and Assessment and Review. These elements mirror the basic elements of Deming’s Plan, Do, Check, Act cycle and complete them by adding more comprehensive detail. 
The Model is based on the premise that excellent results in Performance, Customers, People and Society are achieved through Leadership driving Policy and Strategy, which is delivered through People, Partnerships and Resources. It is used as a basis for self-assessment, an exercise in which an organization grades itself against the nine criteria. This exercise helps organizations to identify current strengths and areas for improvement against strategic goals. This gap analysis then facilitates definition and prioritization of improvement plans to achieve sustainable growth and enhanced performance.

Target audience

People who are coordinating or leading improvement or change programs; people who are providing training, coaching or consultancy in EFQM.

Scope and constraints

The EFQM Excellence Model has an enterprise-wide scope. It takes a holistic view to enable organizations, regardless of size or sector, to:

  • Assess where they are, helping them to understand their key strengths and potential gaps in performance 
  • Provide a common vocabulary and way of thinking about the organization that facilitates the effective communication of ideas, both within and outside the organization.
  • Integrate existing and planned initiatives, removing duplication and identifying gaps.
Strengths:
  • It provides a holistic framework that systematically addresses a thorough range of organizational quality issues and also pays attention to impacts through the ‘Results’ criteria. 
  • It provides a clear diagnosis of an organization’s activities and is useful for planning as it links what an organization does and what results it achieves, highlighting how they are achieved. 
Constraints:
  • The Model is relatively difficult to implement and generates benefits over a longer period of time; an overall organizational strategy on excellence needs to be adopted in order to achieve the benefits.

Relevant links (web links)

Official EFQM website: www.efqm.org
Other useful websites:
www.ink.nl (Dutch institute for quality management; runs schemes similar to EFQM).
www.quality.nist.gov (US National Institute of Standards and Technology).

You can download the full list with explanations here: Van Haren list of standards and best practices connected to ITSM

Thursday, July 11, 2013

Provance explains the benefits of combined ITSM and ITAM

In 2010 Provance pioneered the development of process management packs for Microsoft System Center, introducing the first third-party management pack for Microsoft System Center Service Manager. 

Microsoft System Center Service Manager is an integrated management platform that helps companies to easily manage datacenters, client devices, and hybrid cloud IT environments following ITIL® (Information Technology Infrastructure Library) and MOF (Microsoft® Operations Framework) best practice frameworks for IT Service Management.

At Provance they say “Your IT Service Management and IT Asset Management programs are only as good as the information supporting them”. In other words, by using the Provance Data Management Pack companies can easily and automatically track and store data about the assets in their infrastructure on one platform. This improves the consistency and accuracy of the data and at the same time saves companies time. 

Provance recently published a white paper in which the benefits of combined IT Service Management and IT Asset Management are explained from their point of view. They state that even though IT Service Management and IT Asset Management are separate disciplines, most often handled by separate areas of a company’s business, they provide the best results when combined. This is mainly because they rely on the same data, so integrating the two brings benefits in time and cost savings as well as data accuracy and consistency.

As an example they provide the following workflow (Process for Providing a New Employee with a Computer) as an illustration of the interrelation of these two processes.
Process for Providing a New Employee with a Computer
Source: Provance white paper "The benefits of combined IT Service Management and IT Asset Management"

If you want to find out more about the benefits of combined IT Service Management and IT Asset Management you can read the following white paper.

 

Copyright @ 2013 Wise Guide to ITSM.