
Forum DhammaCitta. Indonesian Buddhist Discussion Forum

Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - hatRed

76
Source: http://dbmsmusings.blogspot.com/2009/07/announcing-release-of-hadoopdb-longer.html
By: Daniel Abadi

Monday, July 20, 2009
Announcing release of HadoopDB (longer version)
If you have a short attention span, see the shorter blog post.
If you have a large attention span, see the complete 12-page paper.

There are two undeniable trends in analytical data management. First, the amount of data that needs to be stored and processed is exploding. This is partly due to the increased automation with which data can be produced (more business processes are becoming digitized), the proliferation of sensors and data-producing devices, Web-scale interactions with customers, and government compliance demands along with strategic corporate initiatives requiring more historical data to be kept online for analysis. It is no longer uncommon to hear of companies claiming to load more than a terabyte of structured data per day into their analytical database system and claiming data warehouses of size more than a petabyte (see the end of this write-up for some links to large data warehouses).

The second trend is what I talked about in my last blog post: the increased desire to perform more and more complex analytics and data mining inside of the DBMS.

I predict that the combination of these two trends will lead to a scalability crisis for the parallel database system industry. This prediction flies in the face of conventional wisdom. If you talk to prominent DBMS researchers, they'll tell you that shared-nothing parallel database systems horizontally scale indefinitely, with near linear scalability. If you talk to a vendor of a shared-nothing MPP DBMS, such as Teradata, Aster Data, Greenplum, ParAccel, and Vertica, they'll tell you the same thing. Unfortunately, they're all wrong. (Well, sort of.)

Parallel database systems scale really well into the tens and even low hundreds of machines. Until recently, this was sufficient for the vast majority of analytical database applications. Even the enormous eBay 6.5 petabyte database (the biggest data warehouse I've seen written about) was implemented on a Greenplum DBMS of only 96 nodes. But as I wrote previously, that implementation leaves only a handful of CPU cycles available to process each tuple as it is read off disk. As the second trend kicks in, bringing more and more complex data analysis inside the DBMS, this architecture will become entirely unsuitable, and it will be replaced by deployments with many more compute nodes at much larger horizontal scale. Add the fact that many argue it is far more efficient, from a hardware cost and power utilization perspective, to run an application on many low-cost, low-power machines instead of fewer high-cost, high-power machines (see, e.g., the work by James Hamilton), and it will not be at all uncommon to see data warehouse deployments on many thousands of machines (real or virtual) in the future.

Unfortunately, parallel database systems, as they are implemented today, do not scale well into the realm of many thousands of nodes, for a variety of reasons. First, they all compete with each other on performance. The marketing literature of MPP database systems is littered with wild claims of jaw-dropping performance relative to their competitors. These systems will also implement some amount of fault tolerance, but as soon as performance becomes a tradeoff with fault tolerance (e.g., by implementing frequent mid-query checkpointing), performance will be chosen every time. At the scale of tens to hundreds of nodes, a mid-query failure of one of the nodes is a rare event. At the scale of many thousands of nodes, such events are far more common. Some parallel database systems lose all work done so far on a query when a DBMS node fails; others just lose a lot of work (Aster Data might be the best among its competitors on this metric). However, no parallel database system (that I'm aware of) is willing to pay the performance overhead needed to lose a minimal amount of work upon a node failure.

Second, while it is possible to get reasonably homogeneous performance across tens to hundreds of nodes, this is nearly impossible across thousands of nodes, even if each node runs on identical hardware or on an identical virtual machine. Partial failures that do not take a node down completely, but leave its hardware performing in a degraded state, become more common at scale. Individual node disk fragmentation and software configuration errors can also degrade performance on some nodes. Concurrent queries (or, in some cases, concurrent processes) further reduce the homogeneity of cluster performance. Furthermore, we have seen wild fluctuations in node performance when running on virtual machines in the cloud. Parallel database systems tend to do query planning in advance, assigning each node an amount of work based on that node's expected performance. When running on small numbers of nodes, extreme deviations from expected performance are rare, and it is not worth paying the extra overhead of runtime task scheduling. At the scale of many thousands of nodes, extreme outliers are far more common, and query latency ends up being approximately the time it takes the slowest outliers to finish, as the toy simulation below illustrates.
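To make the straggler argument concrete, here is a toy simulation (my own illustration, not data from any vendor or from the HadoopDB paper; the 0.1% outlier probability and the 5x slowdown factor are made-up numbers). With static work assignment, a query finishes only when its slowest node does, so expected latency grows with cluster size even though per-node work stays constant:

Code:
import java.util.Random;

// Toy model: under static scheduling, query latency is the maximum of the
// per-node completion times, so rare slow outliers dominate at large scale.
public class StragglerDemo {
    public static void main(String[] args) {
        Random rnd = new Random(42);
        for (int nodes : new int[] {10, 100, 1000, 10000}) {
            double latency = 0;
            for (int i = 0; i < nodes; i++) {
                // Each node normally finishes in ~100 s (small jitter), but
                // with probability 0.1% it is a 5x outlier (degraded disk,
                // noisy VM neighbor, misconfiguration, ...).
                double t = 100 + rnd.nextGaussian() * 5;
                if (rnd.nextDouble() < 0.001) t *= 5;
                latency = Math.max(latency, t);
            }
            System.out.printf("%6d nodes -> latency ~ %.0f s%n", nodes, latency);
        }
    }
}

At 10 or 100 nodes a run usually contains no outlier, so latency stays near 100 seconds; at thousands of nodes an outlier is almost guaranteed, and latency jumps toward 500 seconds. Runtime task scheduling (as in Hadoop's speculative execution) avoids this by reassigning a straggler's remaining work to other nodes.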

Third, many parallel databases have not been tested at the scale of many thousands of nodes, and in my experience, unexpected bugs in these systems start to appear at this scale.

In my opinion the "scalability problem" is one of two reasons why we're starting to see Hadoop encroach on the structured analytical database market traditionally dominated by parallel DBMS vendors (see the Facebook Hadoop deployment as an example). Hadoop simply scales better than any currently available parallel DBMS product. Hadoop gladly pays the performance penalty for runtime task scheduling and excellent fault tolerance in order to yield superior scalability. (The other reason Hadoop is gaining market share in the structured analytical DBMS market is that it is free and open source, and there exists no good free and open source parallel DBMS implementation.)

The problem with Hadoop is that it also gives up some performance in other areas where there are no tradeoffs with scalability. Hadoop was not originally designed for structured data analysis, and thus is significantly outperformed by parallel database systems on structured data analysis tasks. Furthermore, it is a relatively young piece of software and has not implemented many of the performance-enhancing techniques developed by the research community over the past few decades, including direct operation on compressed data, materialized views, result caching, and I/O scan sharing.

Ideally, there would exist an analytical database system that achieves the scalability of Hadoop along with the performance of parallel database systems (at least, the performance that is not the result of a tradeoff with scalability). And ideally, this system would be free and open source.

That's why my students Azza Abouzeid and Kamil Bajda-Pawlikowski developed HadoopDB. It's an open source stack that includes PostgreSQL, Hadoop, and Hive, along with some glue between PostgreSQL and Hadoop, a catalog, a data loader, and an interface that accepts queries in MapReduce or SQL and generates query plans that are processed partly in Hadoop and partly in different PostgreSQL instances spread across many nodes in a shared-nothing cluster of machines. In essence it is a hybrid of MapReduce and parallel DBMS technologies. But unlike Aster Data, Greenplum, Pig, and Hive, it is not a hybrid simply at the language/interface level. It is a hybrid at a deeper, systems implementation level. Also unlike Aster Data and Greenplum, it is free and open source.
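To make the "hybrid at the systems implementation level" point concrete, here is a minimal sketch of the map-side idea: a Hadoop map task that pushes a SQL fragment into the PostgreSQL instance on its own node and emits partial aggregates for reducers to combine. This is my illustration of the general pattern, not HadoopDB's actual connector code; the database name, table names, and credentials are hypothetical.

Code:
import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Sketch: each map task executes a SQL fragment against the node-local
// PostgreSQL instance, so scans and partial aggregation run inside the
// DBMS while Hadoop handles scheduling, fault tolerance, and the shuffle.
public class LocalSqlMapper
        extends Mapper<LongWritable, Text, Text, LongWritable> {

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // In this sketch, the input record just names the node-local
        // partition (table) this task is responsible for scanning.
        String partitionTable = value.toString();
        String url = "jdbc:postgresql://localhost:5432/warehouse"; // hypothetical
        String sql = "SELECT category, COUNT(*) FROM " + partitionTable
                   + " GROUP BY category";
        try (Connection conn = DriverManager.getConnection(url, "hadoopdb", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                // Emit partial counts; reducers sum them per category.
                context.write(new Text(rs.getString(1)),
                              new LongWritable(rs.getLong(2)));
            }
        } catch (SQLException e) {
            throw new IOException("node-local SQL fragment failed", e);
        }
    }
}

If a node dies mid-query, Hadoop simply reruns the affected map tasks elsewhere, which is where the Hadoop-style fault tolerance and straggler handling come from.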

Our paper (to be presented at the upcoming VLDB conference in the last week of August) shows that HadoopDB achieves fault tolerance and an ability to tolerate wild fluctuations in runtime node performance similar to Hadoop's, while still approaching the performance of commercial parallel database systems (of course, it still gives up some performance due to the tradeoffs mentioned above).

Although HadoopDB is currently built on top of PostgreSQL, other database systems can theoretically be substituted for it. We have successfully run HadoopDB using MySQL instead, and we are currently working on optimizing connectors to open source column-store database systems such as MonetDB and Infobright. We believe that switching from PostgreSQL to a column-store will result in even better performance on analytical workloads.
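Since the node-local engine sits behind plain JDBC in the sketch above, swapping the storage layer amounts, in principle, to changing the driver URL. The snippet below is illustrative only; it is not HadoopDB's actual catalog or configuration mechanism, and the URLs and credentials are hypothetical.

Code:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Illustrative only: selecting the node-local engine by JDBC URL.
public class EngineSwap {
    public static void main(String[] args) throws SQLException {
        String url = (args.length > 0 && args[0].equals("mysql"))
                ? "jdbc:mysql://localhost:3306/warehouse"       // hypothetical
                : "jdbc:postgresql://localhost:5432/warehouse"; // hypothetical
        try (Connection conn = DriverManager.getConnection(url, "hadoopdb", "")) {
            System.out.println("Connected via "
                    + conn.getMetaData().getDriverName());
        }
    }
}

A column-store such as MonetDB or Infobright would plug in the same way at the connection level; getting good analytical performance out of it is the harder part, hence the connector-optimization work mentioned above.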

The initial release of the source code for HadoopDB can be found at http://db.cs.yale.edu/hadoopdb/hadoopdb.html. Although at this point this code is just an academic prototype and some ease-of-use features are yet to be implemented, I hope that this code will nonetheless be useful for your structured data analysis tasks!

77
Watch this on-demand webinar about Apache Hadoop (https://dct.sun.com/dct/forms/reg_us_2005_941_0.jsp), a distributed computing platform that can use thousands of networked nodes to process vast amounts of data. In this webinar, you will learn how Sun's chip multithreading (CMT) technology-based UltraSPARC T2 Plus processor can process up to 256 tasks in parallel within a single node.

We will also share how we evaluated CPU and I/O throughput, memory size, and task counts to extract maximal parallelism per single node.

You will learn about:

    * Scale
      How to use Hadoop to store and process petabytes of data
    * Performance
      How to maximize parallelism per node, plus the results of tests that varied the number of nodes and integrated Flash memory drives
    * Virtualization
      How we created multiple virtual nodes using Solaris Containers
    * Reliability
      How Hadoop automatically maintains multiple copies of data and redeploys tasks based on failures
    * Deployment Options
      How Hadoop can be run in the "cloud" on Amazon EC2/S3 services and in compute farms and high-performance computing (HPC) environments

If you have any questions or feedback, please send a message to newheights [at] sun.com.

Thank you,
Sun Microsystems, Inc.

P.S. Check out over 125 system configs available for free trial. Get our free catalog.

78
Sains / Best Sperm for the Job
« on: 22 July 2009, 02:23:09 PM »
Source: http://www.technologyreview.com/biomedicine/21982/



Ranking sperm cells could improve the odds of in vitro fertilization.


By Courtney Humphries



Some approaches to in vitro fertilization involve mixing sperm and egg in a test tube and letting nature take its course. But in about half of all infertility cases, a problem with the man's sperm may require a more direct method. In these cases, a different process, called intracytoplasmic sperm injection (ICSI), in which a single sperm cell is injected directly into an egg, is sometimes used. With this one-shot opportunity, it's important to choose a sperm cell with the best potential for success. A team at the University of Edinburgh, Scotland, has now announced a new technique to ensure that the best sperm win: analyzing their DNA for potential damage beforehand, and choosing those that are structurally sound.

[Image] Scattered light: Alistair Elfick demonstrates a technology called Raman spectroscopy, which uses laser light to identify chemical changes--in this case, it finds sperm with the best DNA.
Credit: Alistair Elfick

"It's a new development that could be very promising," says Alan Penzias, a reproductive endocrinologist at Boston IVF and Harvard Medical School, who was not involved in the research. Penzias explains that current standards for choosing a single sperm cell for an ICSI procedure usually depend on assessing how well the sperm swims; if none of the sperm can swim, a chemical test can find those that are intact and alive. "It's been really pretty crude," he says.

Alistair Elfick, lead scientist for the Edinburgh team, explains that by choosing a single sperm rather than allowing many sperm to swim to and compete for a place in the egg, "you have very much become the arbiter of the quality of that sperm. So clearly, there's a motivation to have a more rigorous selection procedure." With this new technique, the researchers can rank different sperm and choose the one with the most intact DNA. "The endpoint we're moving towards is having a score of DNA quality," Elfick says. But he adds that the approach is an overall measure of the sperm's health; it's not sensitive enough to pick and choose traits.

The method that Elfick and his colleagues developed relies on Raman spectroscopy, a technique that measures the way molecules scatter photons from a beam of laser light, revealing the molecules' vibrational properties. The unique scattering produced by each molecule creates a fingerprint of the contents of a sample, allowing scientists to analyze its chemical makeup. To probe a single sperm cell with Raman spectroscopy, the researchers first pin it down with optical tweezers--a focused laser beam that is able to "trap" a small object like a living cell. In this application, the researchers use Raman spectroscopy to look at the structure of a sperm cell's DNA and determine whether that DNA is broken or intact. Elfick explains that when DNA breaks, a chemical group forms at the broken ends, and these groups can be detected with Raman spectroscopy.



DNA damage has been associated with cases of male infertility and a loss of sperm's ability to swim. Although the association between DNA breaks and infertility requires more research, Elfick says that "it's highly likely that the better the DNA, the better off the sperm will be."

Preliminary tests suggest that the technique does not harm the cells, although Elfick says that more rigorous testing must be done in order to bring the technique into clinical use. His team is hoping to commercialize this and other applications for Raman spectroscopy, including analyzing breast-cancer cells for specific proteins in order to tailor chemotherapy to individual patients.

Michael Morris, a chemist at the University of Michigan who uses Raman spectroscopy to analyze bone, says that many investigators are working on clinical applications for the technique. At the level of individual cells, scientists are using Raman spectroscopy to distinguish normal cells from cancerous ones, and to identify specific strains of bacteria, such as those that cause treatment-resistant infections in hospitals. Raman spectroscopy also holds promise as a way of studying disease directly in patients. Researchers such as MIT's Michael Feld are investigating the possibility of using it in conjunction with minimally invasive probes to look for cancer or other disease processes inside patients' tissues. Denny Sakkas, a scientist at Yale University and Molecular Biometrics, has developed a similar technology called spectrophotometry to evaluate the viability of embryos, and is working to expand it to analyze human eggs. Morris suspects that many new applications will emerge, as the technology has a great deal of power for detecting chemical change in small samples.

79
Sains / Making Light Bulbs from DNA
« on: 22 July 2009, 02:18:01 PM »
Source: http://www.technologyreview.com/energy/23042/

Dye-doped DNA nanofibers can be tuned to emit different colors of light.
By Courtney Humphries



By adding fluorescent dyes to DNA and then spinning the DNA strands into nanofibers, researchers at the University of Connecticut have made a new material that emits bright white light. The material absorbs energy from ultraviolet light and gives off different colors of light--from blue to orange to white--depending on the proportions of dye it contains.

[Image] DNA light: Coating an ultraviolet LED with DNA nanofibers containing dyes creates a bulb that emits bright white light.
Credit: Angewandte Chemie

The researchers, led by chemistry professor Gregory Sotzing, create white-light-emitting devices by coating ultraviolet (UV) light-emitting diodes (LEDs) with the material. They are even able to fine-tune the white color tone to make it warm or cold, as they report in a paper published online in the journal Angewandte Chemie.

The new material could be used to make a novel type of organic light bulb. The light emitters should also be longer-lasting because DNA is a very strong polymer, Sotzing says. "It's well beyond other polymers [in strength]," he notes, adding that it lasts 50 times longer than acrylic.

The color-tunable DNA material relies on an energy-transfer mechanism between two different fluorescent dyes. The key is to keep the dye molecules separated at a distance of 2 to 10 nanometers from each other. When UV light is shined on the material, one dye absorbs the energy and produces blue light. If the other dye molecule is at the right distance, it will absorb part of that blue-light energy and emit orange light.
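That 2-to-10-nanometer window is the range in which fluorescence (Förster) resonance energy transfer operates, the same mechanism bracketed as "[fluorescence energy transfer]" later in this article: transfer efficiency falls off as the sixth power of the donor-acceptor distance. Here is a small worked illustration; the Förster radius of 5 nm is a typical textbook value I am assuming, not a number reported for these dyes.

Code:
// FRET efficiency vs. donor-acceptor distance: E = 1 / (1 + (r/R0)^6).
// R0 = 5 nm is an assumed, typical Forster radius, for illustration only.
public class FretEfficiency {
    public static void main(String[] args) {
        double r0 = 5.0; // Forster radius in nm (assumed)
        for (double r = 2.0; r <= 10.0; r += 2.0) {
            double e = 1.0 / (1.0 + Math.pow(r / r0, 6));
            System.out.printf("r = %4.1f nm -> transfer efficiency %.2f%n", r, e);
        }
    }
}

The sixth-power falloff is why the spacing matters so much: at 2 nm nearly all the blue-light energy transfers to the orange dye, while at 10 nm almost none does.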

By changing the ratio of the two dyes, the researchers can alter the combined color of light that the material gives off. Varying the amount of dye also lets them make finer tweaks. For example, by increasing the proportion of dye in the DNA from 1.33 percent to 10 percent, they can change the white light from cool to warm. "As you go across the white spectrum, if you want a soft yellow-type light or blue-type light, you can get these very easily with the DNA system," Sotzing says.

Others have used nanostructured materials such as silica nanoparticles and block copolymers--self-assembled materials containing two linked polymer chains--to get the right spacing between the two dyes. But, says David Walt, a chemistry professor at Tufts University, "the advantage in the present system seems to be that the DNA fibers orient the dyes in an optimum way for efficient [fluorescence energy transfer] to occur." Furthermore, when larger amounts of dye are used in the other materials, they start to aggregate. This has two effects: it decreases energy transfer between them, dimming the light output, and it also prevents precise color tuning.



To make the fibers, Sotzing and his colleagues make a solution of salmon DNA and mix in the two types of dye. The solution is pumped slowly out from a fine needle, and a voltage is applied between the needle tip and a grounded copper plate covered with a glass slide. As the liquid jet comes out, it dries and forms long nanofibers that are deposited on the glass slide as a mat. The researchers then spin this nanofiber mat directly on the surface of an ultraviolet LED to make a white-light emitter.

During the fiber-spinning process, the two different dye molecules automatically attach themselves to two different locations on the DNA. The researchers have found in previous work that the nanofiber mats produce light 10 times brighter than thin films of the dye-containing DNA.

"It's really very cool [work], and I think that it has practical promise," says Aaron Clapp, a professor of chemical and biological engineering at Iowa State University. "[But] it seems like an overly dramatic way of doing it."

Clapp speculates that instead of relying on energy transfer between the two fluorescent dyes, one could simply mix the dyes in different ratios to get the desired colors.

However, each dye would then require a different input energy source as opposed to just one UV source, Sotzing points out. What's more, energy transfer between two dyes gives better control over the color of the output light.

Walt says that it may be possible to use the first dye to transfer energy to multiple dyes and get an even wider range of colors. "The results reported here suggest DNA-[energy transfer] light emitters are promising," Walt says, "but the ultimate utility will depend on factors such as lifetime and power efficiency."



80
Sains / Brain Surgery Using Sound Waves
« on: 22 July 2009, 02:06:20 PM »
Source: http://www.technologyreview.com/biomedicine/23031/

Brain Surgery Using Sound Waves

A revolutionary new approach to neurosurgery avoids both radiation and a scalpel.

By Emily Singer



A new ultrasound device, used in conjunction with magnetic resonance imaging (MRI), allows neurosurgeons to precisely burn out small pieces of malfunctioning brain tissue without cutting the skin or opening the skull. A preliminary study from Switzerland involving nine patients with chronic pain shows that the technology can be used safely in humans. The researchers now aim to test it in patients with other disorders, such as Parkinson's disease.

[Image] Sound surgery: A patient about to undergo neurosurgery lies outside a magnetic resonance scanner with his head inside the ultrasound device.
Credit: University Children's Hospital Zurich

[Video] http://www.technologyreview.com/video/?vid=395 -- See how focused ultrasound surgery works.


"The groundbreaking finding here is that you can make lesions deep in the brain--through the intact skull and skin--with extreme precision and accuracy and safety," says Neal Kassell, a neurosurgeon at the University of Virginia. Kassell, who was not directly involved in the study, is chairman of the Focused Ultrasound Surgery Foundation, a nonprofit based in Charlottesville, VA, that was founded to develop new applications for focused ultrasound.

High-intensity focused ultrasound (HIFU) is different from the ultrasound used for diagnostic purposes, such as prenatal screening. Using a specialized device, high-intensity ultrasound beams are focused onto a small piece of diseased tissue, heating it up and destroying it. The technology is currently used to ablate uterine fibroids--small benign tumors in the uterus--and it's in clinical testing for removing tumors from breast and other cancers. Now InSightec, an ultrasound technology company headquartered in Israel, has developed an experimental HIFU device designed to target the brain.

The major challenge in using ultrasound in the brain is figuring out how to focus the beams through the skull, which absorbs energy from the sound waves and distorts their path. The InSightec device consists of an array of more than 1,000 ultrasound transducers, each of which can be individually focused. "You take a CT scan of the patient's head and tailor the acoustic beam to focus through the skull," says Eyal Zadicario, head of InSightec's neurology program. The device also has a built-in cooling system to prevent the skull from overheating.
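For intuition about what focusing an array of transducers means: in free space, each element fires with a time offset chosen so that all wavefronts arrive at the focal point simultaneously. The sketch below is a toy calculation of those firing delays with made-up geometry; it deliberately ignores the per-element skull aberration correction derived from the CT scan, which is the hard part the InSightec device solves.

Code:
// Toy phased-array focusing: delay each element so all wavefronts arrive
// at the focal point at the same time. Ignores the CT-derived skull
// aberration correction that the real device applies per element.
public class FocusDelays {
    public static void main(String[] args) {
        double c = 1500.0;  // approx. speed of sound in soft tissue, m/s
        double[][] elements = {{-0.06, 0.08}, {0.0, 0.10}, {0.06, 0.08}}; // m
        double[] focus = {0.0, 0.0};  // target point, m
        double[] tof = new double[elements.length];
        double maxTof = 0;
        for (int i = 0; i < elements.length; i++) {
            double dx = elements[i][0] - focus[0];
            double dy = elements[i][1] - focus[1];
            tof[i] = Math.hypot(dx, dy) / c;  // time of flight to focus
            maxTof = Math.max(maxTof, tof[i]);
        }
        for (int i = 0; i < elements.length; i++) {
            // Farther elements fire earlier; delays equalize arrival times.
            System.out.printf("element %d: fire delay %.1f us%n",
                              i, (maxTof - tof[i]) * 1e6);
        }
    }
}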

The ultrasound beams are focused on a specific point in the brain--the exact location depends on the condition being treated--that absorbs the energy and converts it to heat. This raises the temperature to about 130 degrees Fahrenheit and kills the cells in a region approximately 10 cubic millimeters in volume. The entire system is integrated with a magnetic resonance scanner, which allows neurosurgeons to make sure they target the correct piece of brain tissue. "Thermal images acquired in real time during the treatment allow the surgeon to see where and to what extent the rise in temperature is achieved," says Zadicario.

The Swiss study, published this month in the Annals of Neurology, tested the technology on nine patients with chronic debilitating pain that did not respond to medication. The traditional treatment for these patients is to use one of two methods to destroy a small part of the thalamus, a structure that relays messages between different brain areas. Surgeons either use radiofrequency ablation, in which an electrode is inserted into the brain through a hole in the skull, or they use focused radiosurgery, a noninvasive procedure in which a focused beam of ionizing radiation is delivered to the target tissue. Zadicario says HIFU has advantages over radiosurgery because the effects of killing tissue with radiation can take weeks to months to appear, whereas the thermal approach is immediate. Adds Kassell, "The precision and accuracy [are] considerably greater with ultrasound, and it should be in principle safer in the long run."

[Image] Brain feedback: Focused ultrasound beams heat a target in the brain, while real-time images captured by the scanner give the neurosurgeon immediate feedback on the procedure.
Credit: University Children's Hospital Zurich

According to the new study, all nine patients reported immediate pain relief after the outpatient procedure and were up and about soon afterward. "Two patients had a glass of Prosecco [wine] with us," says Ernst Martin, director of the Magnetic Resonance Center at the University Children's Hospital Zurich and lead author of the study. The patients did report feeling a few seconds of tingling or dizziness, and in one case a brief headache, as the targeted tissue heated up, he says. But none experienced neurological problems or other side effects after surgery.

"This will give a lot of impetus for manufacturers of focused ultrasound equipment to get interested in the brain," says Kassell. An experimental version of InSightec's ultrasound device is currently being tested in five medical centers around the globe. In addition to using it with Parkinson's patients and those who suffer other movement disorders, scientists plan to test the technology as a treatment for brain tumors, epilepsy, and stroke.

One downside of HIFU compared to the more invasive neurosurgeries performed with an electrode is that surgeons are unable to functionally test whether they have targeted the correct part of the brain. During traditional surgery for Parkinson's, for example, the neurosurgeon stimulates the target area with the electrode to make sure he or she has identified the piece of the brain responsible for the patient's motor problems, and then kills that piece of tissue.

"Not every functional neurosurgeon will accept this [new approach], because you cannot do a test before the lesion is made," says Ferenc Jolensz, director of the Division of MRI and Image Guided Therapy Program at Brigham and Women's Hospital in Boston. Jolensz and collaborator Seung-Schik Yoo are developing ways to use HIFU to modulate brain activity in a localized area, which would enable functional testing of the target area before it is destroyed. Jolensz is also studying HIFU for brain surgery and has tested the technology on four patients with brain tumors, though the results have not yet been published.

81
Seremonial / Happy Asadha Day 2009
« on: 21 July 2009, 05:40:25 PM »
The holy day of Asadha commemorates three important events, namely:

- The Buddha's first sermon to the five ascetics at the Deer Park at Isipatana.
- The formation of the first Sangha of bhikkhus.
- The completion of the Tiratana/Triratna (Buddha, Dhamma, and Sangha).


Quote
When the Buddha was still unwilling to undertake the teaching of the Dhamma, Mahābrahmā Sahampati thought, “Nassati vata bho loko! Vinassati vata bho loko!”, that is, “O friends, the world will perish! O friends, the world will perish! The Buddha, worthy of the homage of devas and humans for having penetrated the knowledge of all Dhammas in the world, does not deign to teach the Dhamma!” Then, in an instant, as swiftly as a strong man stretching out his bent arm or bending his outstretched arm, Brahmā Sahampati vanished from the brahmā realm and, together with ten thousand other Mahābrahmās, appeared before the Buddha.

At that time, Mahābrahmā Sahampati placed his shawl (the brahmā shawl) over his left shoulder and knelt with his right knee touching the ground (the brahmā way of sitting). Paying homage to the Buddha with his joined hands raised, he said:

“Great Buddha, may You be willing to teach the Dhamma to all beings: humans, devas, and brahmās. Great Buddha, possessor of fine speech, may You teach the Dhamma to all beings: humans, devas, and brahmās. There are many beings with only a little dust of defilement in the eye of their knowledge and wisdom. If these beings get no opportunity to hear the Buddha's Dhamma, they will suffer a great loss, missing the extraordinary Dhamma of Magga-Phala that they deserve to attain. Noble Buddha, it will be proven that some among them are able to comprehend the Dhamma that You teach.”

Then again, having spoken in ordinary prose, Mahābrahmā also presented his request in verse, as follows:

“Great Buddha, in the past, before Your appearance, in the country of Magadha there were impure, false teachings, taught by six teachers of wrong views such as Pūraṇa Kassapa, teachings stained by the mud of impurity. Therefore, may You open the gateway of Magga into the eternal Nibbāna (shut since the disappearance of Buddha Kassapa's teaching). Let all beings hear the Dhamma of the Four Noble Truths, which is seen clearly by You who are free from the dust of kilesa.

“Noble and wise Buddha, possessor of the eye of wisdom that sees all things! Just as a keen-sighted man standing on a mountaintop sees all the people around him, so too do You, noble Buddha, freed from sorrow, ascend the tower of Paññā and behold all beings, humans, devas, and brahmās, who have fallen into the abyss of sorrow (crushed by birth, old age, sickness, death, and so on).

“Noble and intelligent Buddha, who knows only victory and has never been defeated in any battle! Arise! Noble Buddha, free from the debt of sensual pleasures, whose custom it is to free beings who wish to hear and follow the Buddha's teaching from the hard journey of birth, old age, sickness, and death, and who, like the leader of a caravan, conducts them safely to Nibbāna! May You wander this world proclaiming the Dhamma of the great Buddha; may You teach the Four Noble Truths to all beings, humans, devas, and brahmās. Noble Buddha, there are beings who can see and understand the Dhamma that You teach.”

Quote
After reflecting and seeing, the Buddha gave His assent to Mahābrahmā Sahampati in the following verse:

Apārutā tesaṃ amatassa dvārā,
ye sotavanto pamuñcantu saddhaṃ;
Vihiṃsasaññī paguṇaṃ na bhāsiṃ,
dhammaṃ paṇītaṃ manujesu brahme.

O Mahābrahmā Sahampati, I do not close the door of Magga that leads devas and humans into the eternal Nibbāna and to Liberation. (That door stands always open.) May devas and humans who possess good hearing (sotapasāda) show their faith in Me.

 _/\_

82


http://www.hannibalrising.com






From IMDb: http://www.imdb.com/title/tt0367959/

Quote
Mischa and Hannibal, baby sister and brother, are inseparable; it is their love for each other that seals their bond. Their companionship is forever binding until, while the family is hiding from the Nazi war machine, a twisted set of circumstances sets the pace for a most vicious attack, and for the future of one Hannibal Lecter, sworn to vengeance for the brutal killing of his baby sister. Years later, we find Hannibal the teenager setting up in Paris, living with his aunt Lady Murasaki Shikibu and studying at medical school, where he finds his forte. He is still searching for his sister's murderers, still bitter, and still ever hopeful of satisfying his desire for retribution. That chance arrives, and soon we learn that for a pound of flesh lost, a pound of flesh must be repaid. This is a horrific tale of justice and honor, of a young man's growing pains that will have the guilty paying with more than just flesh and bone. This is the up-and-rising tale of the young Hannibal; prey you do not meet him, for meat you shall be to him. Taste his wrath.  Written by Cinema_Fan

This is the story of the monster Hannibal Lecter's formative years. These experiences as a child and young adult led to his remarkable contribution to the fields of medicine, music, painting and forensics. We begin in World War II at the medieval castle in Lithuania built by Dr. Lecter's forebear, Hannibal the Grim. The child Hannibal survives the horrors of the Eastern Front and escapes the grim Soviet aftermath to find refuge in France with the widow of his uncle, a mysterious and beautiful Japanese woman descended from Lady Murasaki Shikibu, author of the Tale of Genji. Her kind and wise attentions help him understand his unbearable recollections of the war. Remembering, he finds the means to visit the outlaw predators that changed him forever as they battened on the helpless during the collapse of the Eastern Front. Hannibal helps these war criminals toward self-knowledge even as we see his own nature become clear to him. Written by Bloody-Disgusting.com

Based on Thomas Harris' book of the same name, this prequel shows a young Hannibal Lecter in three different phases of his life, from childhood in Lithuania to his ten years in England, up to his time in Russia before his capture by FBI agent Will Graham in Red Dragon. Written by Ankofae


:jempol:

83
Facebook hits a quarter billion users




Least surprising news of the day: Facebook has officially grown to 250 million active users across the world, according to a post on the company blog by CEO Mark Zuckerberg.

"For us, growing to 250 million users isn't just an impressive number; it is a mark of how many personal connections all of you have made, and how far we at Facebook have to go to extend the power of connection to the billions of people around the world," Zuckerberg wrote. (The post is accompanied by an animation of how Facebook's growth spread around the world, which is pretty cool.)

Facebook announced that it had reached 200 million members barely over three months ago. Then, Facebook commemorated the occasion with the launch of a new nonprofit-focused initiative, Facebook for Good. This time, they're not launching anything fancy, just assuring members that they're continuing to develop and innovate.

"Today as we celebrate our 250 millionth user, we are also continuing to develop Facebook to serve as many people in the world in the most effective way possible," Zuckerberg wrote. "This means reaching out to everyone across the world and making products that serve all of you, wherever you are--whether through Facebook Connect, new mobile products and the other things that we are building."

Interesting that he specifically mentioned mobile development. Facebook's growth explosion as of late has been largely overseas, and some would argue that the next frontier for the massive social network would be to make better inroads into countries where people are more likely to be accessing the Web on a mobile device than on a computer.

Facebook Connect, which lets external sites use Facebook login credentials and some profile data, has been one of the company's most high-profile projects since debuting about a year ago. It's also been a big success, with some reports that the company may build a powerful advertising network around it.

And "other things" likely entail the social network's virtual currency system, a potentially lucrative product that was finally announced after much speculation but has yet to make any kind of formal debut or rollout.

It took about four months for Facebook to go from 150 million to 200 million members, and slightly longer than that for it to grow from 100 million to 150 million.

Also making Facebook-related milestones this week: "The Accidental Billionaires," the factually questionable account of the social network's early days at Harvard, debuted in bookstores on Tuesday and had cracked Amazon's top-100 ranking by the end of the day.

Copied from http://news.cnet.com/8301-13577_3-10287336-36.html

84
Gadget dan Toys / ASK - Notebook Suggestions
« on: 17 July 2009, 09:33:15 AM »
Friends,

I'd like some suggestions, please... I'm looking for a laptop:

- in the 5-6 million rupiah price range
- with a 14-inch screen
- lightweight


One more question on top of that  ;D ;D

I'm interested in this notebook, the ASUS K40IN T4200; the review is here http://dhammacitta.org/forum/index.php/topic,11590.0.html by om markos...

Is there any contender for the notebook above? I'm drawn to that Asus for its 512 MB dedicated VGA  8), and its FSB is decent too, at around 800 MHz.

Thanks a lot  ^:)^

85
I'd like to ask: does this forum have any rules for members who want to post fundraising events like that?  :-?


And what is the process?

86
Kaki Lima / Want to Buy - Secondhand PS2
« on: 09 July 2009, 12:54:51 PM »
Brothers and sisters, if anyone is selling a secondhand PS2, please PM me...

A coworker of mine is looking for one... if anyone happens to know a place that sells that sort of thing, that works too....

Thanks...

87

 <:-P <:-P <:-P <:-P <:-P <:-P <:-P <:-P

It's Cide's birthday...

3 July 1993

 <:-P <:-P <:-P <:-P <:-P <:-P <:-P <:-P

88
Since my laptop is broken, I want to run a little testing experiment  ;D

I want to test whether the processor is still OK, but since there's no other laptop I can use as a test bed,

can a mobile Pentium from a laptop run in a Socket 478 desktop? (Looking at the specs, the 1.6 GHz Banias laptop processor seems to be Socket 478...)

89
Kaki Lima / Want to Buy a Pentium Centrino
« on: 18 June 2009, 01:38:57 PM »
Hello,

My laptop won't turn on anymore.... after I press the power button there's only fan activity, and the LCD doesn't light up at all.......

It's probably the processor... so I'd like to ask: is anyone selling one? And could I try it out first?  ;D Who knows, maybe it's not the processor that's broken....  :P

Here is the list of processor options:

Intel Pentium M (Banias) processor, 1.7-GHz
Intel Pentium M (Banias) processor, 1.6-GHz
Intel Pentium M (Banias) processor, 1.5-GHz
Intel Pentium M (Banias) processor, 1.4-GHz
Intel Pentium M (Dothan) processor, 1.5 GHz
Intel Pentium M (Dothan) processor, 1.6 GHz
Intel Pentium M (Dothan) processor, 1.7 GHz
Intel Pentium M (Dothan) processor, 1.8 GHz
Intel Pentium M (Dothan) processor, 2.0 GHz


The one in bold (the 1.6-GHz Banias) is my current processor; I'm planning to replace it with the 2.0 GHz Dothan.....

90
Humor / [Video] Funny Animal Clips
« on: 13 June 2009, 05:31:05 PM »
Joining in with a copy-paste ahhhhh  :whistle:

