Friday, December 24, 2010

Top ERP predictions for 2011 - Computerworld

Saturday, December 11, 2010

POSCO: “Growing into a comprehensive materials company”


Source: http://www.greendaily.co.kr/news/articleView.html?idxno=10909
POSCO: “Growing into a comprehensive materials company”
November 17, 2010 (Wed) | Choi Ho, reporter, snoop@etnews.co.kr

▲ POSCO held the ‘POSCO Global EVI Forum 2010’ at its Global R&D Center in Songdo, Incheon, on the 17th. POSCO Chairman Chung Joon-yang delivers the opening address.
POSCO has unveiled a new marketing innovation plan before its customer companies from around the world.
 
On the 17th, POSCO held the ‘POSCO Global EVI Forum 2010’ at its Global R&D Center in Songdo, Incheon, inviting some 900 people from about 430 global customer companies.
 
At the event, POSCO announced that it will pursue its own marketing strategy, a POSCO-style EVI (Expanded Value Initiative for customer), which goes beyond the existing EVI (Early Vendor Involvement) approach of collaborating with parts suppliers from the earliest stages of technology and product development: POSCO will proactively propose product and technology development to customer companies across all industries and supply total solutions.
 
Accordingly, POSCO plans to divide all demand industries into (1) major industry groups with a high share of steel demand, (2) new industry groups with strong growth potential, and (3) project industry groups facing threats from substitutes and low-priced materials, and to pursue comprehensive EVI activities focused on developing competitive technologies for aggressive market development.
 
Until now, leading global steelmakers such as Nippon Steel and ArcelorMittal have pursued EVI activities mainly with automakers; POSCO is the first to extend EVI to customers across all industries, including home appliances, shipbuilding, energy, construction and heavy equipment.
 
In the automotive sector, a major industry group with a high share of steel demand, POSCO will pursue lighter car bodies and parts; for new industry groups with strong growth potential, such as renewable energy, building materials and offshore plants, it plans to focus on new-concept wind towers, lighter construction equipment, and high-strength products that can replace existing materials.
 
In his opening address, Chairman Chung Joon-yang stressed, “The way to survive in an uncertain competitive environment is for all the players along the supply chain to run together for shared growth. By putting our soul into our products and services and serving our customers, doing business with POSCO should itself bring customers happiness and serve as a stepping stone to their success.”
 
Meanwhile, global top companies representing each industry, including Toyota, Sony, ExxonMobil and Caterpillar, attended the event. POSCO exchanged some 30 memoranda of understanding (MOUs) with major domestic and overseas customers, covering long-term materials supply and joint technology development.

“Daewoo International should spearhead the global strategy” | Kyunghyang.com

Gartner Symposium/ITxpo Webinar Series - Orlando

http://mediazone.brighttalk.com/event/Gartner/27d8d40b22-4312-event

Gartner Keynote: Transitions -- New Realities, Risks & Opportunities
Monday, 18 October -- 3:30pm EDT
Current Trends and Future Directions of the IT Industry: Gartner Scenario 2011
Tuesday, 19 October -- 9:00am EDT
The 2011 CIO Agenda: Leading in Times of Transition
Tuesday, 19 October -- 1:00pm EDT
Infrastructure & Operations: Top 10 Trends to Watch
Wednesday, 20 October -- 9:00am EDT
Top 10 Strategic Technology Trends for 2011
Wednesday, 20 October -- 1:00pm EDT
Mastermind Interview Series: Marc Benioff
Thursday, 21 October -- 9:00am EDT
Mastermind Interview Series: John Chambers
Thursday, 21 October -- 1:00pm EDT
Mastermind Interview Series: Steve Ballmer
Friday, 22 October -- 9:00am EDT

Friday, December 10, 2010

The lights never go out in the ‘Materials Business Office’ on the 8th floor of POSCO Center

Source: http://www.asiae.co.kr/news/view.htm?idxno=2009100709164445942
[Corporate Issue] The lights never go out in the ‘Materials Business Office’ on the 8th floor of POSCO Center
- “Focusing on energy-related materials such as lithium, rare earths and nuclear plant materials”
- Strategic materials development for SMEs boosts national competitiveness
Shin Geun-soon, reporter, 2010-11-05 2:29:58 PM
 
The 8th floor of POSCO Center in Daechi-dong, Gangnam-gu, stays lit even at night, thanks to the 30 or so staff of POSCO’s Materials Business Office who work there.

Launched in March under Senior Vice President Kim Ji-yong, the Materials Business Office is working around the clock on POSCO’s future growth businesses.

Put simply, the Materials Business Office’s work is “work without iron” (in Korean, a pun on the word for “immature”): it handles a wide range of materials connected to steel, including non-ferrous metals, magnesium, manganese, silicon, nickel, chromium, lithium, ceramics and rare earths.

These materials are either essential to the steelmaking process or indispensable to renewable energy industries such as solar power and fuel cells. Japanese companies, the leaders in the materials industry, have likewise evolved from steel to stainless steel to titanium to advanced materials, a shift aimed at securing the resources needed for steelmaking and creating new sources of revenue.

For the same reasons, POSCO has declared its transformation from a world-class steelmaker into a “global comprehensive materials provider” supplying basic and innovative materials.

The driving force behind this move is POSCO’s solid industry-academia-research cooperation system. Through RIST (Research Institute of Industrial Science and Technology) and POSTECH, POSCO commands hundreds of PhD-level specialists, along with a “POSCO family” of affiliates covering many fields of expertise. The Materials Business Office acts as the control tower that pulls these capabilities together and commercializes them.

The materials businesses under its charge are wide-ranging: lithium, magnesium, nuclear-related materials, anode materials for secondary batteries, silicon materials and green-energy materials, among others.

Why did POSCO choose these materials?

Team leader Jung Suk-mo of the Materials Business Office explained, “We are concentrating on strategic materials related to POSCO’s steel business that offer large import-substitution effects and are needed to strengthen national competitiveness.”

This is also why, in the ten WPM (World Premier Materials) projects launched last September with the goal of making Korea one of the world’s top four materials powers, POSCO took overall charge of eco-friendly smart surface-treated steel sheet and ultra-light magnesium materials for transportation equipment, and joined as a sub-lead for ultra-high-purity silicon carbide (SiC) materials and electrode materials for high-energy secondary batteries.
As transportation energy efficiency and low-carbon requirements tighten, magnesium, lighter and stronger than competing materials, is expected to be a key to capturing the future automotive market. POSCO’s magnesium sheet business, in production since 2007, first shipped sheet for eyeglass frames and has since expanded into laptop casings, kitchenware and ondol (heated-floor) panels; in April, POSCO agreed with Gangwon Province to build a magnesium smelting plant. POSCO aims to apply the magnesium materials developed through WPM to automotive parts.

POSCO is also developing energy-related materials as it seeks to move from heavy energy consumer to energy producer. “We are focusing on high-value-added materials with strong growth prospects, such as lithium, rare earths and nuclear plant materials,” Jung said.

Lithium, as is well known, is a core material for secondary batteries and for next-generation fusion power. POSCO has begun work with the Korea Institute of Geoscience and Mineral Resources to commercialize technology for extracting lithium from seawater, and it participates in a consortium for lithium materials development research with Bolivia, which holds the world’s largest lithium reserves. Development of anode materials, which determine the stability of secondary batteries, is under way at POSCO Chemtech using tar, a by-product of steelmaking.

For nuclear plant materials, POSCO signed a cooperation MOU with KEPCO last May, targeting 90 percent localization. Through localization, POSCO aims to achieve some 200 billion won a year in import substitution for every two reactors built, and to become a global supplier of core nuclear parts and materials.

In April, POSCO also signed an agreement with Kazakhstan’s UKTMP to establish a joint titanium slab plant, contributing to the localization of titanium, previously imported in its entirety. Titanium is a high-grade material used in shipbuilding, desalination plants and aircraft engines, and is an essential material for nuclear power equipment.

After eight busy months, the Materials Business Office is already showing results. Still, as noted in the recent parliamentary audit, with SMEs making up the overwhelming majority of the parts and materials industry, there are concerns that large companies may be encroaching on their territory.

Jung responded, “In the course of our work, we have often seen domestic parts-and-materials SMEs lacking application capabilities. POSCO is working to develop materials suited to these companies so that they can grow and both sides can win.”

This is in keeping with the spirit of the “POSCO family.” “Every time POSCO’s materials localization solves the sourcing problems of SMEs that struggled to secure raw materials, I feel proud,” Jung said. That, he says, is why the team can laugh off routine overtime, frequent trips at home and abroad, and the teasing of colleagues in other departments, who see the giant periodic table posted in the office being memorized and ask whether the team means to “buy the Earth.”

Jung stresses that the materials business cannot be judged on immediate profitability alone. With Korea’s core materials technology at about 60 percent of advanced-country levels, he points out, the more finished goods Korea exports, the more materials powerhouse Japan earns, a pattern known as the “cormorant economy.” “The materials business must be handled strategically across national industry as a whole,” he said, “and POSCO will strive to drive national development once again, building on the capabilities it has accumulated.”

Gartner Identifies the Top 10 Strategic Technologies for 2011


Source: http://www.gartner.com/it/page.jsp?id=1454221


Gartner Identifies the Top 10 Strategic Technologies for 2011

Analysts Examine Latest Industry Trends During Gartner Symposium/ITxpo, October 17-21, in Orlando
STAMFORD, Conn., October 19, 2010 —  
Gartner, Inc. today highlighted the top 10 technologies and trends that will be strategic for most organizations in 2011. The analysts presented their findings during Gartner Symposium/ITxpo, being held here through October 21.
Gartner defines a strategic technology as one with the potential for significant impact on the enterprise in the next three years. Factors that denote significant impact include a high potential for disruption to IT or the business, the need for a major dollar investment, or the risk of being late to adopt.
A strategic technology may be an existing technology that has matured and/or become suitable for a wider range of uses. It may also be an emerging technology that offers an opportunity for strategic business advantage for early adopters or with potential for significant market disruption in the next five years.   As such, these technologies impact the organization's long-term plans, programs and initiatives.
“Companies should factor these top 10 technologies in their strategic planning process by asking key questions and making deliberate decisions about them during the next two years,” said David Cearley, vice president and distinguished analyst at Gartner.
“Sometimes the decision will be to do nothing with a particular technology,” said Carl Claunch, vice president and distinguished analyst at Gartner. “In other cases, it will be to continue investing in the technology at the current rate. In still other cases, the decision may be to test or more aggressively deploy the technology.”
The top 10 strategic technologies for 2011 include:
Cloud Computing. Cloud computing services exist along a spectrum from open public to closed private. The next three years will see the delivery of a range of cloud service approaches that fall between these two extremes. Vendors will offer packaged private cloud implementations that deliver the vendor's public cloud service technologies (software and/or hardware) and methodologies (i.e., best practices to build and run the service) in a form that can be implemented inside the consumer's enterprise. Many will also offer management services to remotely manage the cloud service implementation. Gartner expects large enterprises to have a dynamic sourcing team in place by 2012 that is responsible for ongoing cloudsourcing decisions and management.
Mobile Applications and Media Tablets. Gartner estimates that by the end of 2010, 1.2 billion people will carry handsets capable of rich mobile commerce, providing an ideal environment for the convergence of mobility and the Web. Mobile devices are becoming computers in their own right, with an astounding amount of processing ability and bandwidth. There are already hundreds of thousands of applications for platforms like the Apple iPhone, in spite of the limited market (only the one platform) and the need for unique coding.
The quality of the experience of applications on these devices, which can apply location, motion and other context in their behavior, is leading customers to interact with companies preferentially through mobile devices. This has led to a race to push out applications as a competitive tool to improve relationships and gain advantage over competitors whose interfaces are purely browser-based.
Social Communications and Collaboration. Social media can be divided into: (1) Social networking: social profile management products, such as MySpace, Facebook, LinkedIn and Friendster, as well as social networking analysis (SNA) technologies that employ algorithms to understand and utilize human relationships for the discovery of people and expertise. (2) Social collaboration: technologies such as wikis, blogs, instant messaging, collaborative office, and crowdsourcing. (3) Social publishing: technologies that assist communities in pooling individual content into a usable, community-accessible content repository, such as YouTube and flickr. (4) Social feedback: gaining feedback and opinion from the community on specific items, as witnessed on YouTube, flickr, Digg, Del.icio.us, and Amazon. Gartner predicts that by 2016, social technologies will be integrated with most business applications. Companies should bring together their social CRM, internal communications and collaboration, and public social site initiatives into a coordinated strategy.
Video.  Video is not a new media form, but its use as a standard media type used in non-media companies is expanding rapidly. Technology trends in digital photography, consumer electronics, the web, social software, unified communications, digital and Internet-based television and mobile computing are all reaching critical tipping points that bring video into the mainstream. Over the next three years Gartner believes that video will become a commonplace content type and interaction model for most users, and by 2013, more than 25 percent of the content that workers see in a day will be dominated by pictures, video or audio.
Next Generation Analytics. Increasing compute capabilities of computers including mobile devices along with improving connectivity are enabling a shift in how businesses support operational decisions. It is becoming possible to run simulations or models to predict the future outcome, rather than to simply provide backward looking data about past interactions, and to do these predictions in real-time to support each individual business action. While this may require significant changes to existing operational and business intelligence infrastructure, the potential exists to unlock significant improvements in business results and other success rates.
Social Analytics. Social analytics describes the process of measuring, analyzing and interpreting the results of interactions and associations among people, topics and ideas. These interactions may occur on social software applications used in the workplace, in internally or externally facing communities or on the social web. Social analytics is an umbrella term that includes a number of specialized analysis techniques such as social filtering, social-network analysis, sentiment analysis and social-media analytics. Social network analysis tools are useful for examining social structure and interdependencies as well as the work patterns of individuals, groups or organizations. Social network analysis involves collecting data from multiple sources, identifying relationships, and evaluating the impact, quality or effectiveness of a relationship.
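As a toy illustration of one of the specialized techniques named above, a basic social network analysis measure, degree (how many direct ties each person has), can be computed from a list of observed interactions. This is a minimal sketch; the data and names below are entirely hypothetical, not drawn from any Gartner tool.

```python
# Minimal social network analysis sketch (hypothetical data):
# compute each person's degree, i.e. the number of direct
# relationships they appear in, from a list of interaction pairs.
from collections import Counter

interactions = [("ana", "bob"), ("ana", "cho"), ("bob", "cho"), ("ana", "dev")]

degree = Counter()
for a, b in interactions:
    # Each interaction contributes one tie to both participants.
    degree[a] += 1
    degree[b] += 1

print(degree.most_common(1))  # [('ana', 3)] -- the most connected person
```

Real SNA tools layer richer measures (betweenness, clustering, sentiment) on top of this same relationship-counting foundation.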
Context-Aware Computing. Context-aware computing centers on the concept of using information about an end user's or object's environment, activities, connections and preferences to improve the quality of interaction with that end user. The end user may be a customer, business partner or employee. A contextually aware system anticipates the user's needs and proactively serves up the most appropriate and customized content, product or service. Gartner predicts that by 2013, more than half of Fortune 500 companies will have context-aware computing initiatives and by 2016, one-third of worldwide mobile consumer marketing will be context-awareness-based.
Storage Class Memory. Gartner sees huge use of flash memory in consumer devices, entertainment equipment and other embedded IT systems. It also offers a new layer of the storage hierarchy in servers and client computers that has key advantages — space, heat, performance and ruggedness among them. Unlike RAM, the main memory in servers and PCs, flash memory is persistent even when power is removed. In that way, it looks more like disk drives where information is placed and must survive power-downs and reboots. Given the cost premium, simply building solid state disk drives from flash will tie up that valuable space on all the data in a file or entire volume, while a new explicitly addressed layer, not part of the file system, permits targeted placement of only the high-leverage items of information that need to experience the mix of performance and persistence available with flash memory.  
Ubiquitous Computing.  The work of Mark Weiser and other researchers at Xerox's PARC paints a picture of the coming third wave of computing where computers are invisibly embedded into the world. As computers proliferate and as everyday objects are given the ability to communicate with RFID tags and their successors, networks will approach and surpass the scale that can be managed in traditional centralized ways. This leads to the important trend of imbuing computing systems into operational technology, whether done as calming technology or explicitly managed and integrated with IT. In addition, it gives us important guidance on what to expect with proliferating personal devices, the effect of consumerization on IT decisions, and the necessary capabilities that will be driven by the pressure of rapid inflation in the number of computers for each person.
Fabric-Based Infrastructure and Computers.  A fabric-based computer is a modular form of computing where a system can be aggregated from separate building-block modules connected over a fabric or switched backplane. In its basic form, a fabric-based computer comprises a separate processor, memory, I/O, and offload modules (GPU, NPU, etc.) that are connected to a switched interconnect and, importantly, the software required to configure and manage the resulting system(s). The fabric-based infrastructure (FBI) model abstracts physical resources — processor cores, network bandwidth and links and storage — into pools of resources that are managed by the Fabric Resource Pool Manager (FRPM), software functionality. The FRPM in turn is driven by the Real Time Infrastructure (RTI) Service Governor software component. An FBI can be supplied by a single vendor or by a group of vendors working closely together, or by an integrator — internal or external.
A video replay of the Top 10 Strategic Technologies presentation will be available via the Gartner Symposium/ITxpo Webinar Series. The webinar series will provide full video replays of the Gartner Symposium/ITxpo keynotes, as well as selected Gartner analyst presentations. More information is available at http://mediazone.brighttalk.com/event/Gartner/27d8d40b22-4312-intro.
About Gartner Symposium/ITxpo
Celebrating its 20th anniversary, Gartner Symposium/ITxpo is the world's most important gathering of CIOs and senior IT executives. This event delivers independent and objective content with the authority and weight of the world's leading IT research and advisory organization, and provides access to the latest solutions from key technology providers. Gartner's annual Symposium/ITxpo events are key components of attendees' annual planning efforts. IT executives rely on Gartner Symposium/ITxpo to gain insight into how their organizations can use IT to address business challenges and improve operational efficiency. Additional information is available at www.gartner.com/symposium/us.
More exclusive content, expanding multi-media coverage, including Twitter feeds and comments from the Gartner Blog Network will be available at Gartner’s SymLive site at http://gartner.com/symlive.
Upcoming dates and locations for Gartner Symposium/ITxpo include:
October 25-27, Tokyo, Japan: www.gartner.com/jp/symposium
November 8-11, Cannes, France: www.gartner.com/eu/symposium
November 16-18, Sydney, Australia: www.gartner.com/au/symposium
Follow Gartner
Follow news, photos and video coming from Gartner Symposium/ITxpo on Facebook at http://www.facebook.com/home.php#/Gartner?ref=ts, on Twitter at http://twitter.com/Gartner_inc using #GartnerSym, and on flickr at http://www.flickr.com/photos/27772229@N07/.

Top Ten ERP Software Predictions for 2011

Source: http://it.toolbox.com/blogs/erp-roi/top-ten-erp-software-predictions-for-2011-42364

Top Ten ERP Predictions for 2011 

  1. Risk management and mitigation. Even though the economy may not be quite as bad as it was at this time last year, companies are still extremely risk averse. They are not willing to spend millions of dollars on ERP software that is difficult to implement or doesn't deliver measurable value. When they do implement, executives will rely on outside consultants and experts to help them manage and minimize risk.
  2. Increasing focus on organizational change management. Risk management is the name of the game for CIOs, and executives are finally realizing that organizational change management is arguably the single best way to mitigate and manage implementation risk. As recently as 2-3 years ago, before the current recession began, companies viewed org change as an optional, nice-to-have implementation activity; now they are realizing that it is critical.
  3. Increasing need for ERP business cases, ROI analysis, and benefits realization. In the latter half of 2010, we saw a marked shift to organizations focusing on clearly defining a business case and conducting an ROI analysis to assess the viability of their ERP initiatives. Given the risk aversion of many companies, this trend is likely to continue into 2011. This quantitative focus has been a core part of Panorama's methodology since its inception, so this is a welcome trend that will ultimately benefit companies.
  4. ERP lawsuits and canceled ERP projects. Despite companies' desires to mitigate risk and focus on organizational change, they are still going to be pressured by slim IT budgets in the new year. This is going to create a conflicting pressure to cut costs in the wrong places, which will ultimately increase the rate of ERP failures. In addition, because of the low tolerance for risk, companies will be faster to pull the plug on troubled projects and file ERP lawsuits against their vendors if needed.
  5. ERP vendors will get their "mojo" back. Up until recent months, most ERP vendors were getting hammered by a mix of increased competition, tight IT budgets, and mediocre financial results. Signs in the latter half of 2010 pointed to increasing IT spending and pent-up demand for enterprise systems, which is likely to continue into 2011. This should give software vendors increased confidence to hold the line on software pricing, invest more in R&D, and provide more product enhancements.
  6. ERP vendor consolidation. Even though ERP vendors as a whole will be stronger in the coming year compared to years past, they all won't be so lucky. As we emerge from the recession, the market will diverge into a class of stronger ERP vendors and a class of weaker players. Look for the stronger players to acquire some of the weaker ones, resulting in a wave of consolidation.
  7. Heavy adoption of Software as a Service (SaaS) models at small and mid-size businesses (SMBs). Assuming SMBs and start-ups lead us out of the economic doldrums as they have in past recessions, they will look to enterprise software to provide their business foundations for growth. However, these bootstrapped start-ups aren't likely to have the capital funds for heavy up-front costs, so they will likely look more to SaaS ERP and CRM systems.
  8. Continued buzz around cloud computing. While SaaS ERP systems are still years away from capturing a significant portion of the ERP market among mid-size to large organizations, CIOs will continue to look at other cloud computing options. For example, hosted ERP solutions and outsourced IT infrastructures will likely be on the minds of many CIOs. In addition, although larger companies may not yet be in a position to adopt enterprise-wide SaaS models, they will continue to evaluate targeted SaaS solutions, such as Document Management Systems (DMS), Human Resource Systems (HRM/HCM), Product Lifecycle Management (PLM), and Customer Relationship Management (CRM).
  9. A good year for CRM software. Most companies have cut their operating and labor costs to the bone throughout the recession. Most are also starting to realize that the only way to emerge from the recession stronger is to fuel top-line growth and sales, and most will do so without hiring many new sales and customer service reps. For this reason, companies will look to CRM software and social CRM applications to help make their existing sales and customer service functions more effective and efficient.
  10. More focus on diagnostics, analytics, and business intelligence. The recession has shrunk companies' margin for error, so they will continue to rely on their ERP systems to provide the operational data needed to make better, more informed decisions. Look for diagnostics, analytics, and business intelligence applications to gain momentum in the coming year.

What does this all mean to our clients and other companies considering ERP investments in the coming year? The companies that choose the right ERP software for their organizations, best manage business and organizational risk, implement effectively, and position themselves for benefits realization will be better positioned as they head into the new year. This will require companies to more effectively assess vendor viability during their ERP selection processes and leverage ERP implementation best practices more than they have in the past. 

Panorama continues to provide tools, expertise, and resources to those wanting to effectively navigate their ERP system challenges in the new year. Visit our resource center to download industry reports, white papers, benchmark metrics, and other useful information related to ERP selection and implementation best practices. 

Happy holidays and here's to a prosperous and successful 2011!

Top 10 ERP Software Predictions for 2010




A new decade is upon us and the ERP software industry looks quite different than it did at the start of the decade. Ten years ago, the enterprise software space was booming, IT budgets were flush, and companies were replacing systems left and right in preparation for Y2K. 

In contrast, the decade closes with depressed IT spending levels, revenue contraction among many ERP vendors, and uncertainty about the future. However, there are several things to be optimistic about. Here are our ten predictions for the enterprise software space in 2010: 

1. Diligent focus on ERP benefits realization and ROI. Long gone are the days of spending like it's 1999 and hoping for the best. CIOs and COOs will continue to face pressure to prove that every dime of investment in ERP systems is justified and generates a solid return on investment. Look for more deliberate spending, more phased rollouts, buying licenses only as they're needed, and hesitancy to invest in more expensive advanced enterprise software modules. 

2. SMBs to get back into the ERP software market. The bright spot in any recovering economy is usually small businesses (SMBs). As the economy emerges from the recession, SMBs will look for small business software to automate their operations and scale for growth. In addition, large software vendors such as SAP and Oracle will continue to focus on the SMB market to reinvigorate their revenue growth in software license sales. 

3. Increased adoption of Software as a Service (SaaS) at SMBs. While SMBs may lead the charge in their small business software investments, it may be difficult for them to make the necessary investments. Given that tight credit markets will likely continue into the new decade, many SMBs will look to SaaS enterprise software to help them minimize up front capital IT costs. 

4. Lots of SaaS talk, but not as much action at large organizations. Larger companies, on the other hand, are likely to consider SaaS options, but are much less likely than their SMB counterparts to commit to these deployment models. As software vendors expand hybrid solutions combining the benefits of SaaS with the flexibility of traditional ERP (e.g. Oracle's On Demand and SAP's Business By Design offerings), larger organizations will continue opting for non-SaaS options that more commonly reduce cost and risk while maximizing business benefits in the long-term. They will, however, be more inclined to leverage SaaS for some niche functions, such as Document Management Systems (DMS), Human Resource Systems (HRM/HCM), Product Lifecycle Management (PLM), and Customer Relationship Management (CRM). 

5. Increasing focus on organizational change management and benefits realization. As demonstrated by the exponential growth in Panorama's organizational change management practice, companies are directing much of their ERP software investments to areas that ensure they implement effectively and get more out of their existing enterprise investments. The need to more effectively manage organizational and business risk will likely result in a continuation of this trend in 2010. 

6. It's still a buyers' market. Even in the most optimistic scenario, overall 2010 enterprise software spending will not return to pre-recession levels. This means ERP software buyers will remain in the driver's seat, which will be reflected in aggressive software pricing and shared benefits implementation models, such as that introduced by Epicor late this year. 

7. Enterprise software risk management. As CIOs and executive teams remain on the hot seat to prove the value of their investments, risk management will be the name of the game. Look for more ERP implementations to leverage organizational change management and independent oversight of software vendors to help mitigate business risk. 

8. Software vendor consolidation. Vendor competition was fierce before the recession and is even more so now. Dozens of smaller vendors are starved for cash and unable to fuel R&D and other product innovations without infusions of capital. Add the fact that larger vendors have cash and some have grown successfully via acquisition to date (e.g. Oracle and Infor), and continued vendor consolidation looks inevitable. 

9. Focus on integration rather than major product enhancements. Given corporate aversion to risk, companies are going to be less likely to bet on entirely new products or risky upgrades. As a result, vendors are more likely to invest in incremental product enhancements and tighter integration between modules rather than revolutionary changes to their software. 

10. Niches, low-hanging fruit, and business value. Look for companies to be very deliberate about how they invest in enterprise software, the risk they're willing to take, and how they manage implementations. If executives aren't convinced that their enterprise software investments will deliver measurable business value, they won't invest in it. Areas that deliver immediate value are priorities for the coming year. 

We are optimistic about the coming year and can't help but wonder if the economic recession was exactly what the enterprise software market needed. ERP failures, cost overruns, difficult software vendors, and lack of business benefits had become too frequent, and these lean times will not allow those trends to continue. 

So what does this mean to clients and other companies considering ERP investments in the coming year? The companies that choose the right software for their organizations, best manage business and organizational risk, implement effectively, and position themselves for benefits realization will be better positioned headed into the recovery. This will require companies to more effectively assess vendor viability during their ERP selection processes and leverage ERP implementation best practices more than they have in the past. 

Top Trends in ERP for 2010

http://blogs.dlt.com/top-10-trends-erp-2010-part/
http://blogs.dlt.com/erp-forecast-top-trends-erp-2010-part-ii/
http://blogs.dlt.com/top-trends-erp-2010-part-iii/
http://blogs.dlt.com/top-trends-in-erp-2010-part-iv/


  1. Upgrade and footprint expansion activity.
  2. Open Source.
  3. Small businesses going ERP sooner.
  4. Mobile ERP.
  5. New enterprise resource functionality: energy utilization.
  6. Third-party support vs. maintenance contract renewals.
  7. New-growth markets.
  8. Expanding ERP.
  9. SaaS is probably the most significant non-ERP trend to look forward to.
  10. Micro-verticalization will be delivered by channel partners.

Saturday, December 4, 2010

The Interpretation of Dreams

Interpretations of dreams before and after Freud:

Before: dream -> omen -> the future
After: dream -> repressed wishes from the past

Wednesday, December 1, 2010

hadoop on cygwin


Hadoop is a distributed computing platform.
Hadoop primarily consists of the Hadoop Distributed FileSystem (HDFS) and an implementation of the Map-Reduce programming paradigm.
Hadoop is a software framework that lets one easily write and run applications that process vast amounts of data. Here's what makes Hadoop especially useful:
  • Scalable: Hadoop can reliably store and process petabytes.
  • Economical: It distributes the data and processing across clusters of commonly available computers. These clusters can number into the thousands of nodes.
  • Efficient: By distributing the data, Hadoop can process it in parallel on the nodes where the data is located. This makes it extremely rapid.
  • Reliable: Hadoop automatically maintains multiple copies of data and automatically redeploys computing tasks based on failures.
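The Map-Reduce paradigm underlying these properties can be sketched in plain Python. This is a local illustration only: the function names below are ours, not Hadoop's API, and real Hadoop jobs distribute these two phases across a cluster.

```python
# Illustrative Map-Reduce sketch: word count as a map phase that
# emits (word, 1) pairs, and a reduce phase that sums per key.
from collections import defaultdict

def map_phase(lines):
    # Like a Hadoop mapper: emit a (key, value) pair per word.
    for line in lines:
        for word in line.split():
            yield (word, 1)

def reduce_phase(pairs):
    # Like a Hadoop reducer: group pairs by key and sum the values.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["hadoop stores data", "hadoop processes data"]
result = reduce_phase(map_phase(lines))
print(result)  # {'hadoop': 2, 'stores': 1, 'data': 2, 'processes': 1}
```

Because the map output is just key-value pairs, Hadoop can partition it across many reducer nodes, which is what makes the model scale.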

Requirements

Platforms

  • Hadoop has been demonstrated on GNU/Linux clusters with 2000 nodes.
  • Win32 is supported as a development platform. Distributed operation has not been well tested on Win32, so this is not a production platform.

Requisite Software

  1. Java 1.6.x, preferably from Sun. Set JAVA_HOME to the root of your Java installation.
  2. ssh must be installed and sshd must be running to use Hadoop's scripts to manage remote Hadoop daemons.
  3. rsync may be installed to use Hadoop's scripts to manage remote Hadoop installations.

Additional requirements for Windows

  1. Cygwin - Required for shell support in addition to the required software above.

Installing Required Software

If your platform does not have the required software listed above, you will have to install it.
For example on Ubuntu Linux:

$ sudo apt-get install ssh

$ sudo apt-get install rsync


On Windows, if you did not install the required software when you installed cygwin, start the cygwin installer and select the packages:
  • openssh - the "Net" category
  • rsync - the "Net" category

Getting Started

First, you need to get a copy of the Hadoop code.
Edit the file conf/hadoop-env.sh to define at least JAVA_HOME.
Try the following command:
bin/hadoop
This will display the documentation for the Hadoop command script.

Standalone operation

By default, Hadoop is configured to run things in a non-distributed mode, as a single Java process. This is useful for debugging, and can be demonstrated as follows:
mkdir input
cp conf/*.xml input
bin/hadoop jar hadoop-*-examples.jar grep input output 'dfs[a-z.]+'
cat output/*
This will display counts for each match of the regular expression.
Note that input is specified as a directory containing input files and that output is also specified as a directory where parts are written.
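Conceptually, the grep example scans every file in the input directory and counts the occurrences of each distinct string matching the regular expression. A local Python equivalent (ignoring the distributed machinery entirely; this is a sketch, not how Hadoop implements it) might look like:

```python
import os
import re
from collections import Counter

def local_grep(input_dir, pattern):
    """Count each distinct match of `pattern` across all files in `input_dir`,
    roughly what the Hadoop grep example computes."""
    regex = re.compile(pattern)
    counts = Counter()
    for name in os.listdir(input_dir):
        path = os.path.join(input_dir, name)
        if os.path.isfile(path):
            with open(path, encoding="utf-8", errors="ignore") as f:
                for line in f:
                    counts.update(regex.findall(line))
    return counts

# counts = local_grep("input", r"dfs[a-z.]+")
```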

Distributed operation

To configure Hadoop for distributed operation you must specify the following:
  1. The NameNode (Distributed Filesystem master) host. This is specified with the configuration property fs.default.name.
  2. The JobTracker (MapReduce master) host and port. This is specified with the configuration property mapred.job.tracker.
  3. slaves file that lists the names of all the hosts in the cluster. The default slaves file is conf/slaves.
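The slaves file is plain text with one hostname per line. A hedged sketch of how a tool might read it (skipping blank lines; stripping `#` comments is an assumption of this sketch, not guaranteed by every Hadoop version):

```python
def read_slaves(path):
    """Return the hostnames listed one per line in a Hadoop slaves file.
    Blank lines are skipped; '#' comment stripping is an assumption of
    this sketch, not part of the documented format."""
    hosts = []
    with open(path) as f:
        for line in f:
            line = line.split("#", 1)[0].strip()
            if line:
                hosts.append(line)
    return hosts
```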

Pseudo-distributed configuration

You can in fact run everything on a single host. To run things this way, put the following in:

conf/core-site.xml:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost/</value>
  </property>
</configuration>

conf/hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

conf/mapred-site.xml:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>

(We also set the HDFS replication level to 1 in order to reduce warnings when running on a single node.)
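If you script your cluster setup, the same three files can be generated programmatically. The sketch below is a convenience (not part of the Hadoop distribution), using Python's standard xml.etree module and writing into a temporary directory for safety; in practice the target would be Hadoop's conf/ directory.

```python
import os
import tempfile
import xml.etree.ElementTree as ET

def write_hadoop_conf(path, props):
    """Write a Hadoop-style *-site.xml file from a dict of property names to values."""
    root = ET.Element("configuration")
    for name, value in props.items():
        prop = ET.SubElement(root, "property")
        ET.SubElement(prop, "name").text = name
        ET.SubElement(prop, "value").text = value
    ET.ElementTree(root).write(path)

# Writing to a temp dir here; point conf_dir at your real conf/ directory instead.
conf_dir = tempfile.mkdtemp()
write_hadoop_conf(os.path.join(conf_dir, "core-site.xml"),
                  {"fs.default.name": "hdfs://localhost/"})
write_hadoop_conf(os.path.join(conf_dir, "hdfs-site.xml"),
                  {"dfs.replication": "1"})
write_hadoop_conf(os.path.join(conf_dir, "mapred-site.xml"),
                  {"mapred.job.tracker": "localhost:9001"})
```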
Now check that the command
ssh localhost
does not require a password. If it does, execute the following commands:
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

Bootstrapping

A new distributed filesystem must be formatted with the following command, run on the master node:
bin/hadoop namenode -format
The Hadoop daemons are started with the following command:
bin/start-all.sh
Daemon log output is written to the logs/ directory.
Input files are copied into the distributed filesystem as follows:
bin/hadoop fs -put input input

Distributed execution

Things are run as before, but output must be copied locally to examine it:
bin/hadoop jar hadoop-*-examples.jar grep input output 'dfs[a-z.]+'
bin/hadoop fs -get output output
cat output/*
When you're done, stop the daemons with:
bin/stop-all.sh

Fully-distributed operation

Fully distributed operation is just like the pseudo-distributed operation described above, except that you must specify:
  1. The hostname or IP address of your master server in the value for fs.default.name, as hdfs://master.example.com/ in conf/core-site.xml.
  2. The host and port of your master server in the value of mapred.job.tracker, as master.example.com:port, in conf/mapred-site.xml.
  3. Directories for dfs.name.dir and dfs.data.dir in conf/hdfs-site.xml. These are local directories used to hold distributed filesystem data on the master node and slave nodes respectively. Note that dfs.data.dir may contain a space- or comma-separated list of directory names, so that data may be stored on multiple local devices.
  4. mapred.local.dir in conf/mapred-site.xml, the local directory where temporary MapReduce data is stored. It also may be a list of directories.
  5. mapred.map.tasks and mapred.reduce.tasks in conf/mapred-site.xml. As a rule of thumb, use 10x the number of slave processors for mapred.map.tasks, and 2x the number of slave processors for mapred.reduce.tasks.
Finally, list all slave hostnames or IP addresses in your conf/slaves file, one per line. Then format your filesystem and start your cluster on your master node, as above.
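The rule of thumb in step 5 is simple arithmetic; a hypothetical helper (the 10x/2x factors come straight from the guideline above, and the function name is my own):

```python
def suggested_task_counts(slave_processors, map_factor=10, reduce_factor=2):
    """Suggested mapred.map.tasks / mapred.reduce.tasks values per the
    10x / 2x rule of thumb from the Hadoop quickstart."""
    return {
        "mapred.map.tasks": map_factor * slave_processors,
        "mapred.reduce.tasks": reduce_factor * slave_processors,
    }

# e.g. a 10-node cluster with 4 cores per node -> 40 slave processors
print(suggested_task_counts(40))
# {'mapred.map.tasks': 400, 'mapred.reduce.tasks': 80}
```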

Hadoop 0.20.S Virtual Machine Appliance


http://developer.yahoo.com/blogs/hadoop/posts/2010/06/hadoop_020s_virtualmachine/


Hadoop 0.20.S Virtual Machine Appliance

At Yahoo!, we recently implemented a stronger notion of security for the Hadoop platform, based on Kerberos as the underlying authentication system. We also successfully enabled this feature within Yahoo! on our internal data processing clusters. I am sure many Hadoop developers and enterprise users are looking forward to getting hands-on experience with this enterprise-class Hadoop Security feature.
In the past, we've helped developers and users get started with Hadoop by hosting a comprehensive Hadoop tutorial on YDN, along with a pre-configured single node Hadoop (0.18.0) Virtual Machine appliance.
This time, we decided to upgrade this Hadoop VM with a pre-configured single node Hadoop 0.20.S cluster, along with the required Kerberos system components. We have also included Pig (version 0.7.0), a high-level, SQL-like data processing language used at Yahoo!.
This blog post describes how to get started with the Hadoop 0.20.S VM appliance. The basic information about downloading, setting up VM Player, and using the Hadoop VM is the same as described in tutorial module 3, except that the user has to use the following information and links to download the latest VM Player and Hadoop 0.20.S VM Image. You should also review the following information for security-specific commands that need to be performed before running M/R or Pig jobs.
For more details on deploying and configuring Yahoo! Hadoop 0.20.S security distribution, look for continuing announcements and details on Hadoop-YDN.

Installing and Running the Hadoop 0.20.S Virtual Machine:

  • Virtual Machine and Hadoop environment: See details here.
  • Install VMware Player: See details here. To download the latest VMware Player for Windows/Linux, go to the VMware site.
  • Setting up the Virtual Environment for Hadoop 0.20.S:
Copy the [Hadoop 0.20.S Virtual Machine] into a location on your hard drive. It is a zipped VMware folder (hadoop-vm-appliance-0-20-S, approx. 400MB) containing a few files: a .vmdk file that is a snapshot of the virtual machine's hard drive, and a .vmx file that contains the configuration information to start the virtual machine. After unzipping the folder, double-click on the hadoop-appliance-0.20.S.vmx file to start the virtual machine. Note: the uncompressed size of the hadoop-vm-appliance-0-20-S folder is ~2GB; depending on the data you upload for testing, the VM disk is configured to grow up to 20GB.
When you start the virtual machine for the first time, VMware Player will recognize that the virtual machine image is not in the same location it used to be. You should inform VMware Player that you copied this virtual machine image (choose "I copied it"). VMware Player will then generate new session identifiers for this instance of the virtual machine. If you later move the VM image to a different location on your own hard drive, you should tell VMware Player that you have moved the image.
After you select this option and click OK, the virtual machine should begin booting normally. You will see it perform the standard boot procedure for a Linux system. It will bind itself to an IP address on an unused network segment, and then display a prompt allowing a user to log in. Note: the IP address displayed on the login screen can be used to connect to the VM instance over SSH. The login screen also displays information about starting/stopping Hadoop daemons, users/passwords, and how to shut down the VM. Note: it is much more convenient to access the VM via SSH. See details here.
  • Virtual Machine User Accounts:
The virtual machine comes pre-configured with two user accounts: "root" and  "hadoop-user". The hadoop-user account has sudo permissions to perform system-management functions, such as shutting down the virtual machine. The vast majority of your interaction with the virtual machine will be as hadoop-user. To log in as hadoop-user, first click inside the virtual machine's display. The virtual machine will take control of your keyboard and mouse. To escape back into Windows at any time, press CTRL+ALT at the same time. The hadoop-user user's password is hadoop. To log in as root, the password is root.
  • Hadoop Environment:
Linux  : Ubuntu 8.04
Java   : JRE 6 Update 7 (see license info @ /usr/jre16/)
Hadoop : 0.20.S (installed @ /usr/local/hadoop; /home/hadoop-user/hadoop is a symlink to the install directory)
Pig    : 0.7.0 (pig jar installed @ /usr/local/pig; /home/hadoop-user/pig-tutorial/pig.jar is a symlink to the one in the install directory)
Login: hadoop-user, Passwd: hadoop (sudo privileges are granted for hadoop-user). The other users are hdfs and mapred (passwd: hadoop). The Hadoop VM starts all the required Hadoop and Kerberos daemons during the boot-up process, but in case the user needs to stop/restart:
  • To start/stop/restart hadoop: login as hadoop-user and run 'sudo /etc/init.d/hadoop [start | stop | restart]' ('sudo /etc/init.d/hadoop' gives the usage)
  • To format the HDFS & clean all state/logs: login as hadoop-user and run 'sudo reinit-hadoop'
  • To start/stop/restart Kerberos KDC Server: login as hadoop-user and run 'sudo /etc/init.d/krb5-kdc [start | stop | restart]'
  • To start/stop/restart Kerberos ADMIN Server: login as hadoop-user and run 'sudo /etc/init.d/krb5-admin-server [start | stop | restart]'
To shut down the Virtual Machine: login as hadoop-user and run the command 'sudo poweroff'.
Environment for 'hadoop-user' (set in /home/hadoop-user/.profile):
  $HADOOP_HOME=/usr/local/hadoop
  $HADOOP_CONF_DIR=/usr/local/etc/hadoop-conf
  $PATH=/usr/local/hadoop/bin:$PATH
  • Running M/R Jobs:
Running M/R jobs in Hadoop 0.20.S is pretty much the same as running them in a non-secure version of Hadoop, except that before running any Hadoop jobs or HDFS commands, the hadoop-user needs to get a Kerberos authentication ticket using the command 'kinit'; the password is hadoopYahoo1234.
For example:
hadoop-user@hadoop-desk:~$ cd hadoop
hadoop-user@hadoop-desk:~/hadoop$ kinit
Password for hadoop-user@LOCALDOMAIN: hadoopYahoo1234
hadoop-user@hadoop-desk:~/hadoop$ bin/hadoop jar hadoop-examples-0.20.104.1.1006042001.jar pi 10 1000000
For automated runs of Hadoop jobs, a keytab file is created under the hadoop-user's home directory (/home/hadoop-user/hadoop-user.keytab). This allows the user to execute 'kinit' without having to manually enter the password. So for automated runs of Hadoop commands or M/R and Pig jobs through the cron daemon, users can invoke the following command to get the Kerberos ticket. Use the command 'klist' to view the Kerberos ticket and its validity.
For example:
hadoop-user@hadoop-desk:~$ cd hadoop
hadoop-user@hadoop-desk:~/hadoop$ kinit -k -t /home/hadoop-user/hadoop-user.keytab hadoop-user/localhost@LOCALDOMAIN
hadoop-user@hadoop-desk:~/hadoop$ bin/hadoop jar hadoop-examples-0.20.104.1.1006042001.jar pi 10 1000000
  • Running Pig Tutorial:
The Pig tutorial is installed at "/home/hadoop-user/pig-tutorial". Example commands to run the Pig script are given in "example.run.cmd.sh". The data needed for the Pig scripts is already copied to HDFS. See more details about the Pig tutorial at Pig@Apache.
  • hadoop-user@hadoop-desk:~$ cd pig-tutorial
  • hadoop-user@hadoop-desk:~$ sh example.run.cmd.sh
  • Shutting down the VM:
When you are done with the virtual machine, you can turn it off by logging in as the hadoop-user and running the command 'sudo poweroff'. The virtual machine will shut itself down in an orderly fashion and the window it runs in will disappear.
Last but not least, I would like to thank Devaraj Das and Jianyong Dai from the Yahoo! Hadoop & Pig Development team for their help in setting up and configuring Hadoop 0.20.S and Pig respectively.
Notice: Yahoo! does not offer any support for the Hadoop Virtual Machine. The software includes cryptographic software that is subject to U.S. export control laws and applicable export and import laws of other countries. BEFORE using any software made available from this site, it is your responsibility to understand and comply with these laws. This software is being exported in accordance with the Export Administration Regulations. As of June 2009, you are prohibited from exporting and re-exporting this software to Cuba, Iran, North Korea, Sudan, Syria and any other countries specified by regulatory update to the U.S. export control laws and regulations. Diversion contrary to U.S. law is prohibited.