Friday, May 30, 2014

Google Project Ara Modular Smartphone Release Date - Early 2015

Google Project Ara Modular Smartphone Release Date: Basic Phone Costs $50 Including Basic Modules, Endo Alone Costs $15

First Posted: May 24, 2014 12:25 PM EDT
(Photo: Facebook/Project Ara)
Google is preparing for the release of its Project Ara, which is set for early 2015.
Project Ara is a device that will allow users to build their own smartphones and modify them to suit their needs. Toshiba will supply three types of processors for Project Ara, consisting of chips that will be incorporated into the phone itself and into the different modules.
Built around an endoskeleton, the phone will carry various modules on its front and back. Users can add five to ten modules, depending on the size of the skeleton, so the phone can perform different functions. Processors are needed between the different components to regulate data flow. The modules are designed to be plugged in and pulled out, so a new processor, battery, display, or keyboard can be added and the old one removed.
The endoskeleton itself is a structural frame, 10 mm thick, with modules mounted in specific areas, and is available in three sizes -- mini, medium and large. The large size holds more modules than the mini. The frame lets users attach add-ons such as a credit card reader or a camera, and is made of either molded metal or polycarbonate.

Google's Project Ara phone is expected to go on sale in about a year at $50, which would include the basic modules. The advanced modules would require additional payment and can be bought from different developers. The Project Ara endoskeleton alone is estimated at $15. Toshiba joined Google in October 2013 to work on Project Ara and was the lone approved Japanese supplier; it will also be the phone's only chipmaker after the launch.
The idea for Project Ara started in Motorola's Advanced Technology group. Since that group was not included in the imminent sale of Motorola to Lenovo, the project was transferred to Google. The original plan was to build upgradable and customizable modular smartphones.

CDH 5.0 & SPARK



# Download and install the CDH 5 "one-click-install" repository package (RHEL/CentOS 6, x86_64)
$ wget http://archive.cloudera.com/cdh5/one-click-install/redhat/6/x86_64/cloudera-cdh-5-0.x86_64.rpm

$ sudo yum --nogpgcheck localinstall cloudera-cdh-5-0.x86_64.rpm

# Add the Cloudera GPG key
$ sudo rpm --import http://archive.cloudera.com/cdh5/redhat/6/x86_64/cdh/RPM-GPG-KEY-cloudera
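The package query below assumes the pseudo-distributed Hadoop configuration package has already been installed; if it has not, it can be pulled in first (package name as used in the CDH 5 documentation):

$ sudo yum install hadoop-conf-pseudo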

# List the configuration files provided by hadoop-conf-pseudo
$ rpm -ql hadoop-conf-pseudo


# Format the NameNode (run once, as the hdfs user)
$ sudo -u hdfs hdfs namenode -format

# Start all HDFS daemons
$ for x in `cd /etc/init.d ; ls hadoop-hdfs-*` ; do sudo service $x start ; done

# Recreate the HDFS /tmp and YARN staging/log directories with the required owners and permissions
$ sudo -u hdfs hadoop fs -rm -r /tmp

$ sudo -u hdfs hadoop fs -mkdir -p /tmp/hadoop-yarn/staging/history/done_intermediate
$ sudo -u hdfs hadoop fs -chown -R mapred:mapred /tmp/hadoop-yarn/staging
$ sudo -u hdfs hadoop fs -chmod -R 1777 /tmp
$ sudo -u hdfs hadoop fs -mkdir -p /var/log/hadoop-yarn
$ sudo -u hdfs hadoop fs -chown yarn:mapred /var/log/hadoop-yarn

# Start the YARN services and the MapReduce JobHistory server
$ sudo service hadoop-yarn-resourcemanager start
$ sudo service hadoop-yarn-nodemanager start
$ sudo service hadoop-mapreduce-historyserver start
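If the services came up cleanly, the web UIs should respond at their stock default ports (these may differ if reconfigured):

# ResourceManager UI: http://localhost:8088/
# JobHistory UI:      http://localhost:19888/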

# Create an HDFS home directory for the local user (wylee in this example)
$ sudo -u hdfs hadoop fs -mkdir /user
$ sudo -u hdfs hadoop fs -mkdir /user/wylee
$ sudo -u hdfs hadoop fs -chown wylee /user/wylee

# Copy the Hadoop configuration files into HDFS as sample input
$ hadoop fs -mkdir input
$ hadoop fs -put /etc/hadoop/conf/*.xml input
$ hadoop fs -ls input

# Point the client at the YARN/MapReduce v2 framework
$ export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce
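The output23 directory listed below is presumably the result of running one of the bundled example jobs; a typical invocation, following the CDH quick-start pattern (the grep pattern and the output23 name are just the usual example values), would be:

$ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar grep input output23 'dfs[a-z.]+'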

# Inspect the job output
$ hadoop fs -ls
$ hadoop fs -ls output23

# Install Spark (standalone master, worker, and Python support)
$ sudo yum install spark-core spark-master spark-worker spark-python

# Start the standalone Spark master and worker
$ sudo service spark-master start
$ sudo service spark-worker start
# Spark environment settings live in /etc/spark/conf/spark-env.sh

# The Spark Master web UI should now be reachable at:
http://localhost:18080/
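The word-count example below reads from /tmp/input in HDFS; if nothing exists there yet, it can be seeded with any text file first (the config file used here is just a convenient example):

$ hadoop fs -mkdir -p /tmp/input
$ hadoop fs -put /etc/hadoop/conf/core-site.xml /tmp/input/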

# Launch the interactive Spark shell (Scala)
$ spark-shell

// Classic word count: read from HDFS, split lines into words, count each word, write the result back to HDFS
val file = sc.textFile("hdfs://localhost:8020/tmp/input")
val counts = file.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
counts.saveAsTextFile("hdfs://localhost:8020/tmp/output")
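Back in a regular shell, the result can be spot-checked against the same output path used above:

$ hadoop fs -cat hdfs://localhost:8020/tmp/output/part-*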

Seoul to Hanwha Condo Jirisan / 2-night, 3-day trip itinerary

Jirisan, 2-night / 3-day trip

Itinerary (times shown are the minimum driving time for each leg)


Day 1
- Seoul (0:30)
- Jeonju (2:30)
  - Jeonju Hanok Village
  - Lunch: bibimbap
- Namwon (1:30)
  - Gwanghallu Pavilion
  - Chunhyang Theme Park
  - Aerospace observatory
- Condo (1:00)

Day 2
- Condo
  - Hwaeomsa (?)
- Suncheon Bay (1:00)
  - Suncheon open film set
  - Suncheon Bay Garden
  - Suncheon Bay Natural Ecological Park
  - Suncheon Bay reed fields
- Yeosu (1:00)
  - Sky Tower
  - Expo Digital Gallery (EDG)
  - Expo Memorial Hall
  - Aquarium
  - Sky Fly
  - Odongdo
  - Hamel Park
  - Dolsan Bridge
  - Dolsan Park
- Condo (1:30)

Day 3
- Condo
- Gokseong (1:00)
  - Gokseong Train Village
    - Gokseong rose garden
    - Steam locomotive ride
    - Dreamland
    - Musical fountain
    - Natural-predator insect exhibition
- Damyang (1:00)
  - Damyang Juknokwon (bamboo garden)
  - Metasequoia Road
  - Bamboo theme park
- Seoul (3:30)



Day 1:
   (*) Choose between options 2 and 3 depending on traffic
1. Seoul -> Damyang -> Namwon -> Condo
2. Seoul -> Jeonju Hanok Village -> Damyang Juknokwon (Juknokwon / Metasequoia Road / Bamboo Theme Park) -> Namwon Gwanghallu -> Hanwha Resort Jirisan
3. Seoul -> Jeonju Hanok Village -> Namwon Gwanghallu / Chunhyang Theme Park / Aerospace Observatory -> Hanwha Resort Jirisan

- Meals
  - Lunch: Jeonju bibimbap
  - Damyang tteokgalbi (grilled rib patties) + juktongbap (rice steamed in bamboo)
  - Namwon chueotang (loach soup) alley, Mosimjeong
- Attractions
  - Jeonju Hanok Village: http://hanok.jeonju.go.kr/
  - Damyang Juknokwon: http://juknokwon.go.kr/
  - Namwon Gwanghallu, Chunhyang Theme Park: http://www.namwontheme.or.kr/
  - Namwon aerospace observatory: http://spica.namwon.go.kr/
- References
  - Jeonju Hanok Village
  - Juknokwon
  - Metasequoia Road

- Good eats
  - Jeonju Hanok Village: Gilgeoriya (street food)
  - Jeonju Hanok Village dessert: Oehalmeoni Somssi
  - Namwon is the place for chueotang. Rather than the restaurants famous from TV, head out from Gwanghallu toward Gokseong and then look toward downtown; there is a place called Hyun Sikdang whose chueotang is truly outstanding. There is also a chueotang restaurant inside the Gokseong market that is fantastic, and a Korean restaurant in the middle of town, called something like Daechunamu-jip, where even the broth is superb.





Day 2: Hanwha Resort -> Suncheon -> Boseong green tea fields -> Yeosu World Expo park -> Yeosu Odongdo / Hyangiram -> Hanwha Resort
1. Hanwha Resort -> Suncheon -> Boseong -> Yeosu -> Hanwha Resort
2. Hanwha Resort -> Suncheon (reed fields / natural ecological park / Suncheon Bay Garden / drama film set) -> Yeosu (Sky Tower, EDG, Expo Memorial Hall, Aquarium, Sky Fly / Odongdo) -> Yeosu (Odongdo / Hyangiram / Dolsan Park / Dolsan Bridge / Hamel Park / Gosodong Mural Alley) -> Hanwha Resort

- Meals
  - Yeosu's ten signature dishes
    1. Seodae-hoe (raw sole), gejang baekban (marinated crab with rice), hanjeongsik (full-course Korean meal), grilled oysters, grilled eel, gunpyeongseoni (a grilled local fish), gaetjangeo-hoe (raw pike eel), sliced raw fish, Dolsan gat (mustard-leaf) kimchi, kkotgetang (blue crab stew), ...
- Attractions
  - Suncheon Bay reed fields / Suncheon Bay Natural Ecological Park - Suncheon Bay Garden - Suncheon drama film set
  - Yeosu Expo Ocean Park: http://www.expo2012.kr/main.html
    1. Sky Tower, EDG, Expo Memorial Hall, Aquarium, Sky Fly
  - Yeosu Odongdo / Hyangiram / Dolsan Park / Dolsan Bridge / Hamel Park / Gosodong Mural Alley
  - Boseong green tea fields: http://dhdawon.com/page/main.asp <== will there be time?
- Odongdo (excursion boat, camellia train)
- Hyangiram sunrise <= the path up is quite steep.
- Dolsan Bridge night view (from Dolsan Park)
- Suncheon: Geonbong Gukbap
- Boseong green tea fields
- Yeosu Expo Ocean Park aquarium
- Yeosu Gyodong Market






Day 3: Hanwha Resort -> Nogodan -> Hwaeomsa -> Seomjingang Train Village -> Damyang Juknokwon (?) -> Seoul


- Meals
  - ...
- Attractions
  - Hwaeomsa:
  - Jirisan Nogodan
  - Seomjingang Train Village: http://www.gstrain.co.kr/
    1. Rose garden, steam locomotive, Dreamland, musical fountain, insect exhibition
- Seomjingang Train Village - rose garden / steam locomotive (reservation required)



Thursday, May 15, 2014

OPENSTACK COMPUTE FOR VSPHERE ADMINS, PART 1: ARCHITECTURAL OVERVIEW

source: http://cloudarchitectmusings.com/2013/06/24/openstack-for-vmware-admins-nova-compute-with-vsphere-part-1/


[I've updated this series with information based on the new Havana release and changes in vSphere integration with the latest version of OpenStack.  Any changes are prefaced with the word "Havana" in bold brackets.]
One of the most common questions I receive is “How does OpenStack compare with VMware?”  That question comes not only from customers but also from my peers in the vendor community, many of whom have lived and breathed VMware technology for a number of years.  Related questions include “Which is a better cloud platform?”, “Who has the better hypervisor?”, and “Why would I use one vs. the other?”  Many of these are important and sound questions; others, however, reveal a common misunderstanding of how OpenStack matches up with VMware.
The misunderstanding is compounded by the tech media and frankly, IT folks who often fail to differentiate between vSphere and other VMware technologies such as vCloud Director.  It is not enough to just report that a company is running VMware; are they just running vSphere or are they also using other VMware products such as vCloud Director, vCenter Operations Manager, or vCenter Automation Center?  If a business announces they are building an OpenStack Private Cloud, instead of using the VMware vCloud Suite, does it necessarily imply that they are throwing out vSphere as well?  The questions to be asked in such a scenario are “which hypervisors will be used with OpenStack” and “will the OpenStack Cloud be deployed alongside a legacy vSphere environment?”
What makes the first question particularly noteworthy is the amount of code that VMware has contributed to the recent releases of OpenStack.  Ironically, the OpenStack drivers for ESX were initially written, not by VMware, but by Citrix.  These drivers provided limited integration with OpenStack Compute and were, understandably, not well maintained.  However, with VMware now a member of the OpenStack foundation, their OpenStack team has been busy updating and contributing new code.  I believe deeper integration between vSphere and OpenStack is critical for users who want to move to an Open Cloud platform, such as OpenStack, but have existing investments in VMware and legacy applications that run on the vSphere platform.
In this and upcoming blog posts, I will break down how VMware compares to and integrates with OpenStack, starting with the most visible and well-known component of OpenStack – Nova compute.  I’ll start by providing an overview of the Nova architecture and how vSphere integrates with that project.  In subsequent posts, I will provide design and deployment details for an OpenStack Cloud that deploys multiple hypervisors, including vSphere.
OpenStack Architecture
OpenStack is built on a shared-nothing, messaging-based architecture with modular components, each of which manages a different service; these services work together to instantiate an IaaS Cloud.
[Figure: OpenStack Grizzly logical architecture]
A full discussion of the entire OpenStack Architecture is beyond the scope of this post.  For those who are unfamiliar with OpenStack or need a refresher, I recommend reading Ken Pepple’s excellent OpenStack overview blog post and my “Getting Started With OpenStack” slide deck.  I will focus, in this post, on the compute component of OpenStack, called Nova.
Nova Compute
Nova compute is the OpenStack component that orchestrates the creation and deletion of compute/VM instances.  To gain a better understanding of how Nova performs this orchestration, you can read the relevant section of the “OpenStack Cloud Administrator Guide.”  Like other OpenStack components, Nova is based on a modular architectural design where services can be co-resident on a single host or, more commonly, spread across multiple hosts.
[Figure: Nova compute architecture]
The core components of Nova include the following:
  • The nova-api accepts and responds to end-user compute API calls.  It also initiates most of the orchestration activities (such as running an instance) as well as enforcing some policies.
  • The nova-compute process is primarily a worker daemon that creates and terminates virtual machine instances via hypervisor APIs (XenAPI for XenServer/XCP, libvirt for KVM or QEMU, VMwareAPI for vSphere, etc.).
  • The nova-scheduler process is conceptually the simplest piece of code in OpenStack Nova: it takes a virtual machine instance request from the queue and determines where it should run (specifically, which compute node it should run on).
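To make the division of labor concrete, a single instance request from the command line might look like the sketch below (the image, flavor, and instance names are placeholders): nova-api accepts the call, nova-scheduler picks a compute node, and nova-compute on that node drives the hypervisor to create the VM.

$ nova boot --image cirros-0.3.1-x86_64 --flavor m1.tiny demo-instance-1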
Although many have mistakenly made direct comparisons between OpenStack Nova and vSphere, that comparison is actually quite inaccurate, since Nova sits at a layer above the hypervisor layer.  OpenStack in general, and Nova in particular, is most analogous to vCloud Director (vCD) and vCloud Automation Center (vCAC), not to ESXi or even vCenter.  In fact, it is very important to remember that Nova itself does NOT come with a hypervisor but manages multiple hypervisors, such as KVM or ESXi.  Nova orchestrates these hypervisors via APIs and drivers.  The list of supported hypervisors includes KVM, vSphere, Xen, and others; a detailed list of what is supported can be found on the OpenStack Hypervisor Support Matrix.
[Figure: Nova hypervisor management options]
Nova manages its supported hypervisors through APIs and native management tools.  For example, Hyper-V is managed directly by Nova, KVM is managed via a virtualization management tool called libvirt, while Xen and vSphere can be managed either directly or through a management tool (libvirt for Xen and vCenter for vSphere, respectively).
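As a concrete, simplified illustration of how a compute node gets bound to one hypervisor type, the driver is chosen in nova.conf; the fragment below uses Havana-era option names for a KVM node, and the exact option names can differ in other releases:

[DEFAULT]
compute_driver = libvirt.LibvirtDriver
libvirt_type = kvm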
vSphere Integration with OpenStack Nova
OpenStack Nova compute manages vSphere 4.1 and higher through two compute driver options provided by VMware – vmwareapi.VMwareESXDriver and vmwareapi.VMwareVCDriver:
  • The vmwareapi.VMwareESXDriver driver, originally written by Citrix and subsequently updated by VMware, allows Nova to communicate directly to an ESXi host via the vSphere SDK.
  • The vmwareapi.VMwareVCDriver driver, developed by VMware initially for the Grizzly release, allows Nova to communicate with a VMware vCenter server managing a cluster of ESXi hosts.  [Havana] In the new Havana release a single vCenter server can manage multiple clusters.
Let’s talk more about these drivers and how Nova leverages them to manage vSphere.  Note that I am focusing specifically on compute here and deferring discussion of vSphere networking and storage to other posts.
ESXi Integration with Nova (vmwareapi.VMwareESXDriver)
[Figure: ESXDriver block diagram]
Logically, the nova-compute service communicates directly with the ESXi host; vCenter is not in the picture.  Since vCenter is not involved, using the ESXDriver means advanced capabilities, such as vMotion, High Availability, and Dynamic Resource Scheduler (DRS), are not available.  Also, in terms of physical topology, you should note the following:
  • Unlike Linux kernel based hypervisors, such as KVM, vSphere with OpenStack requires the VM instances to be hosted on an ESXi server distinct from a Nova compute node, which must run on some flavor of Linux.  In contrast, VM instances running on KVM can be hosted directly on a Nova compute node.
  • Although a single OpenStack installation can support multiple hypervisors, each compute node will support only one hypervisor.  So any multi-hypervisor OpenStack Cloud requires at least one compute node for each hypervisor type.
  • Currently, the ESXDriver has a limit of one ESXi host per Nova compute service.
[Figure: ESXDriver topology]
vCenter Integration with Nova (vmwareapi.VMwareVCDriver)
[Havana] Logically, the nova-compute service communicates to a vCenter Server, which handles management of one or more ESXi clusters.  With vCenter in the picture, using the VCDriver means advanced capabilities, such as vMotion, High Availability, and Dynamic Resource Scheduler (DRS), are now supported.  However, since vCenter abstracts the ESXi hosts from the nova-compute service, the nova-scheduler views each cluster as a single compute/hypervisor node with resources amounting to the aggregate resources of all ESXi hosts managed by that cluster.  This can cause issues with how VMs are scheduled/distributed across a multi-compute node OpenStack environment.
[Havana] For now, look at the diagram below, courtesy of VMware.  Note that nova-scheduler sees each cluster as a hypervisor node; we’ll discuss in another post how this impacts VM scheduling and distribution in a multi-hypervisor Cloud with vSphere in the mix.  I also want to highlight that the VCDriver integrates with the OpenStack Image service, a.k.a. Glance, to instantiate VMs with full operating systems installed and not just bare VMs.  I will be doing a write-up on that integration in a later post.
[Figure: vSphere with Nova architecture]
[Havana] Pulling back a bit, you can begin to see below how vSphere integrates architecturally with OpenStack alongside a KVM environment.  We’ll talk more about that as well in another post.
[Figure: Multi-hypervisor architecture with Havana]
Also, in terms of physical topology, you should note the following:
  • Unlike Linux kernel based hypervisors, such as KVM, vSphere with vCenter on OpenStack requires a separate vCenter Server host, and the VM instances must be hosted in an ESXi cluster, on ESXi hosts distinct from the Nova compute node.  In contrast, VM instances running on KVM can be hosted directly on a Nova compute node.
  • Although a single OpenStack installation can support multiple hypervisors, each compute node will support only one hypervisor.  So any multi-hypervisor OpenStack Cloud requires at least one compute node for each hypervisor type.
  • To utilize the advanced vSphere capabilities mentioned above, each Cluster must be able to access datastores sitting on shared storage.
  • [Havana] In Grizzly, the VCDriver had a limit of one vSphere cluster per Nova compute node.  With Havana, a single compute node can now manage multiple vSphere clusters.
  • Currently, the VCDriver allows only one datastore to be configured and used per cluster.
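Putting the configuration pieces together, a minimal, illustrative nova.conf fragment for a Havana compute node using the VCDriver might look like the following; the placeholder values need to match your vCenter environment, and option names can differ in other releases:

[DEFAULT]
compute_driver = vmwareapi.VMwareVCDriver

[vmware]
host_ip = <vCenter server IP>
host_username = <vCenter username>
host_password = <vCenter password>
cluster_name = <vSphere cluster to manage>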
Hopefully, this post has helped shed some light on where vSphere integration stands with OpenStack.  In upcoming posts, I will provide more technical and implementation details.
See part 2 for DRS, part 3 for HA and VM Migration, part 4 for Resource Overcommitment and part 5 for Designing a Multi-Hypervisor Cloud.