
30/01/11

WikiLeaks Jakarta Embassy Cable

Viewing cable 10JAKARTA186, MISSION INDONESIA FUNDING REQUEST TO AMPLIFY SOCIAL MEDIA EFFORT IN TIME FOR MARCH POTUS VISIT

Reference ID: 10JAKARTA186
Created: 2010-02-12 05:05
Released: 2011-01-19 11:11
Classification: UNCLASSIFIED
Origin: Embassy Jakarta
VZCZCXYZ0003
RR RUEHWEB

DE RUEHJA #0186 0430503
ZNR UUUUU ZZH
R 120503Z FEB 10
FM AMEMBASSY JAKARTA
TO SECSTATE WASHDC 4468
UNCLAS JAKARTA 000186

SIPDIS
DEPARTMENT FOR R/PPR, IIP/EAP, EAP/PD 
INFO FOR PA, PA/OBS, EAP/MTS, S/P FOR JARED COHEN, IIP FOR DAN SREEBNY 
NSC FOR PRADEEP RAMAMURTHY WHITE HOUSE FOR KATIE LILLIE

E.O. 12958: N/A 
TAGS: PGOV KPAO ID XC
SUBJECT: MISSION INDONESIA FUNDING REQUEST TO AMPLIFY SOCIAL MEDIA EFFORT IN TIME
FOR MARCH POTUS VISIT

REF: Jakarta 0065

1. Action/funding request in paragraph 5. 2. Summary: Mission Indonesia requests immediate additional funding to use new media and social networking tools to maximize online outreach for the POTUS visit scheduled for late March, 2010. Already the
leading U.S. Mission in the world on Facebook with nearly 50,000 “fans,” and one 
of the leading Missions using Twitter, YouTube and engaging local bloggers to 
promote USG messages and information, we are uniquely positioned to use these 
tools to amplify key topics and themes to support the upcoming visit by President 
Obama. We request $100,000 in funding from R to boost our Facebook fan page 
membership to 1 million, and can accomplish this in 30 days. End Summary. 
Proven PD 2.0 Expertise

3. U.S. Mission Indonesia is on the forefront of Public Diplomacy 2.0. With over 50,000 fans, the most of any diplomatic mission worldwide, we are gaining traction using social media in Indonesia for PD. Our efforts were recently cited in an
article on CNET Asia, as “a great example of social media interaction in Indonesia,
” prompting the author to wonder “how long will it take for other organizations 
and businesses to do the same?” We also have our own YouTube channel with over 300 
videos, almost 1,000 followers on Twitter, and -- for the last two years -- have engaged positively with thousands of the country’s most influential bloggers.

Indonesia’s New Media Environment

4. Indonesia’s internet community is emerging, but recently has become a powerful
political force for reform (reftel). With roughly 10% of the population able to 
access the internet at least monthly, this represents over 25 million people,
nearly half of whom are on Facebook. In the seventh-largest and one of the 
fastest-growing Facebook markets in the world, we will directly reach a young, 
urban population which does not rely on traditional media as information sources. 
In addition, Indonesians’ special connection to the 44th President means that 
interest in the visit is incredibly high. Our Facebook post announcing the visit 
had interactions and comments from over 1,000 people in less than two weeks, 
and stories about the visit appeared in the media every day since the official 
announcement.

5. By actively connecting the POTUS visit to our new media efforts, we have a 
unique chance to build a sustainable online platform to engage Indonesians on
USG issues and messages long after the visit. With enough funding to properly 
amplify and build on our past successes, Mission Indonesia requests $100,000 immediately in order to reach a goal of 1 million Facebook fans in just 30 days -- just before the POTUS visit.

Action Plan and Implementation

6. This money would be used in three areas. First, it would increase direct 
advertising via Facebook. Currently, Embassy Jakarta spends less than $25 per day 
on advertising, and nets between 300-400 new fans daily. Increasing this tenfold over 30 days results in a gain of 100,000 to 120,000 fans. The funds would also
be used to promote the visit and our fan page as the place to learn more by 
extensively advertising on Indonesian online portals, banner ads, YouTube, 
Twitter, and other promotional efforts, including embedding bloggers, contests 
and giveaways, and using SMS technology. With over 100 million mobile phone 
users in Indonesia, texting is a powerful way to include a huge audience. 
Partnering with a major telecom provider, we can encourage Indonesians to 
sign up for real-time updates via their cell phone -- a great way to reach 
those not yet online about the visit. Cost: $60,000.
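The advertising math in paragraph 6 can be checked with a quick back-of-envelope calculation. The daily figures are the cable's own; the tenfold multiplier is the proposed budget increase, and linear scaling of fan acquisition with ad spend is the cable's simplifying assumption:

```python
# Back-of-envelope projection of Facebook fan growth using the cable's figures.
current_low, current_high = 300, 400  # new fans per day at under $25/day of ads
multiplier = 10                       # proposed tenfold increase in ad spend
days = 30                             # campaign length before the POTUS visit

low_gain = current_low * multiplier * days
high_gain = current_high * multiplier * days
print(f"Projected gain over {days} days: {low_gain:,} to {high_gain:,} fans")
# Strictly linear scaling of the quoted 300-400 fans/day gives 90,000-120,000;
# the cable rounds this to "100,000 to 120,000". Real ad auctions rarely
# scale linearly with spend, so either range is an optimistic estimate.
```
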

7. Another key promotion strategy to generate interest will be offering a “golden 
ticket” via Facebook. We propose making a dream come true for one lucky Indonesian,
by providing an opportunity to meet POTUS during his visit. If the White House 
approves, we could invite fans to post why they should meet President Obama, and 
in doing so, use our social media platform to connect fans to the visit, as well 
as build excitement beforehand and follow-up coverage afterwards. In addition, 
we could partner with a local TV station to have a “finalist” show and increase 
coverage. RSO would ensure any winner(s) are vetted for security issues. If the 
White House would not agree to this, an alternate “dream prize” might be an 
educational trip to the U.S. Cost: $15,000.

8. Third, in order to implement these ideas in this limited time-frame, 
we need short-term expert help on this promotion in the form of a qualified 
local digital marketing agency, who could assist the Embassy’s new media team 
(currently one officer and three FSNs working on it part-time). Cost: $25,000.

OSIUS
 

Sourceforge Attack: Full Report

Posted on Saturday, January 29th, 2011 by admin
Category: General
As we’ve previously announced, SourceForge.net has been the target of a directed attack. We have completed the first round of analysis, and have a much more solid picture of what happened, the extent of the impact, and our plan to reduce the future risk of attack. We’re still working hard on fixing things, but we wanted to share what we know with the community.
We discovered the attack on Wednesday, and have been working hard to get things back in order since then. While several boxes were compromised we believe we caught things before the attack escalated beyond its first stages.
Our early assessment of which services and hosts were impacted, and the choice to disable CVS, ishell, file uploads, and project web updates appears to have prevented any further escalation of the attack or any data corruption activities.
We expect to continue work on validating data through the weekend, and begin restoring services early next week. There is a lot of data to be validated and these tests will take some time to run.  We’ll provide more timeline information as we have more information.
We recognize that we could get services back online faster if we cut corners on data validation. We know downtime causes serious inconveniences for some of you. But given the negative consequences of corrupted data, we feel it’s vital to take the time to validate everything that could potentially have been touched.

Attack Description

The general course of the attack was pretty standard. There was a root privilege escalation on one of our platforms which permitted exposure of credentials that were then used to access machines with externally-facing SSH. Our network partitioning prevented escalation to other zones of our network.
This is the point where we found the attack, locked down servers, and began work on analysis and response.

Immediate Response

Our first response included many of the standard steps:
* analysis of the attack and log files on the compromised servers
* methodically checking all other services and servers for exploits
* further network lockdown and updating of server credentials

Service shutdown

Once we knew the attack was present, we locked down the impacted hosts so that we could reduce the risk of escalation from those servers to other hosts, and prevent possible data gathering activities.
This strategy resulted in service downtime for:
* CVS Hosting
* ViewVC
* New Release upload capability
* ProjectWeb/shell

Password invalidation

Our analysis uncovered (among other things) a hacked SSH daemon, which was modified to do password capture. We don’t have reason to believe the attacker was successful in collecting passwords. But the presence of this daemon, and server-level access to one-way hashed and encrypted password data, led us to take the precautionary measure of invalidating all SourceForge user account passwords. Users have been asked to recover account access by email.
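The "one-way hashed" storage mentioned above is what limits the damage when password data is exposed: the server keeps only a salted digest, never the password itself. A minimal sketch using Python's standard library (the PBKDF2 parameters here are illustrative, not SourceForge's actual scheme):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest); a server stores only these, never the password."""
    if salt is None:
        salt = os.urandom(16)  # unique per-user salt defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Re-derive the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored_digest)
```

Because the derivation is one-way, an attacker who copies the stored (salt, digest) pairs still has to brute-force each password, which is why invalidating all passwords is a precaution rather than a response to confirmed theft.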

Data Validation

It’s better to be safe than sorry, so we’ve decided to perform a comprehensive validation of project data, from file releases to SCM commits. We will compare data against pre-attack backups and identify changed and added files. We will review that data, and will also refer anything suspicious to individual project teams for further assessment as needed.
The validation work is a precaution: while we don’t have evidence of any data tampering, we’d much prefer to burn a bunch of CPU cycles verifying everything than to discover later that some extra special trickery led to some undetected badness.

Service Restoration

Now that most of the analysis is done, we’ve started the next stage of our efforts, which includes the obvious work of restoring compromised boxes from bare metal, and implementing a number of new controls to reduce likelihood of future attack.
We will of course also be updating the credentials which reside on these hosts, and we have performed quite a few steps to further lock down access to these machines.
We are in the process of bringing services back one by one, as data validation is completed and the newly configured hosts come online. We expect that data validation will progress through the weekend, and we’ll get into full swing on service restoration early next week.

File Release Services

Many folks have suggested that the most likely motivation for an attack against SourceForge would be to corrupt project releases.
We’ve found no evidence of this, but are taking extraordinary care to make sure that we don’t somehow distribute corrupted release files.
We are performing validation of data against stored hashes, backups, and additional data copies.
We expect to restore these services first, as soon as data validation is completed.
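Validating release files against stored hashes, as described above, amounts to recomputing a digest for each file and comparing it to the pre-attack record. A minimal sketch of that check (the manifest format is hypothetical; SourceForge's actual tooling is not public):

```python
import hashlib
from pathlib import Path

def sha256_of(path):
    """Stream a file through SHA-256; releases can be large, so read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_against_manifest(manifest, root):
    """Return the files whose current digest differs from the stored, pre-attack one."""
    suspect = []
    for relpath, expected in manifest.items():
        if sha256_of(Path(root) / relpath) != expected:
            suspect.append(relpath)
    return suspect
```

Anything this kind of sweep flags would then go to the project team for manual review, as the report describes.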

Project Web

One attack vector that impacts our services directly is the shared project web space. So, let’s talk about that in a bit more detail.
Sourceforge.net has been around a long time, and security decisions made a decade ago are now being reassessed. In most cases past decisions were made around the general principle that we trust open source developers to work together, play nice, and generally do the right thing. Services were rolled out based on widespread trust for the developer community. And that philosophy served us well.
But in the years since then, we’ve evolved from hundreds of sf.net users to millions, and in many cases it’s time to re-assess the balance between widespread trust and security. Project Web is a prime example of this, and we’ve been working at a deliberate pace to isolate project web space, and have begun rolling out the new “secure project web” service to many of our projects.
This new secure project web includes a new security model that moves us away from shared hosting while preserving the scalability we need for mass hosting.
Because of this attack we’ll be accelerating the rollout of Secure Project Web services as part of the process of bringing the project web service back online. This will allow us to provide both improved functionality and better security.

CVS

CVS service is one of SourceForge.net’s oldest services and, due to limitations in CVS itself, cannot readily live on our scalable network storage solution. Validation of this data is going to require several days and we anticipate that this service will be restored sometime in the later part of the week.
We are also considering the end-of-life of the CVS service and hope to have user support in migrating CVS users to Subversion in coming months. Subversion generally provides parity to CVS commands, and many of our users have made this transition successfully in the past.
From SVN, projects can move to Git if desired.

Looking forward

We are very much committed to the ongoing process of improving our security, and we will continue making behind the scenes improvements to our infrastructure on a regular basis. This isn’t a one time event, it’s a process, and we’re going to stay fully engaged over the long term.
I’d like to end on a more personal note: I’ve been working with our Ops team a lot this week, and I think we can all say that the patience and support we’ve received from the community has been the best part of a very bad week.
Thanks again for all the support and encouragement.

My Site Recommendation

Hello, this is my new site. If you want to visit it, go here:
my web

This is a search engine with another function. If you want SEO tools, you can go to seo tools, and this one is for proxy surfing; try it. Or you can try this submitter for your blog or site submission. I hope this gives you a bigger collection of search engines and SEO tools. Thanks, and enjoy :).

25/01/11

Multi-Core Scaling In A KVM Virtualized Environment

Earlier this week we published benchmarks comparing Oracle VM VirtualBox to Linux KVM and the Linux host system performance. Some of the feedback said it was a bad idea to give the hardware-virtualized guest all twelve CPU threads of the Intel Core i7 "Gulftown", since virtualization technologies are supposedly bad at dealing with multiple virtual CPUs. But is this really the case? Unable to find any concrete benchmarks in that area, we carried out another set of tests to see how well the Linux Kernel-based Virtual Machine scales against the host as the number of CPU cores available to each is increased. The results are both good and bad for Linux virtualization.
This series of tests was again carried out on the Intel Core i7 970 "Gulftown" system with its six physical cores plus Hyper Threading to provide a total count of 12 threads. While Intel’s next-generation products will soon outdo this CPU, the i7 970 has a base frequency of 3.2GHz and a turbo frequency of 3.46GHz. There is 12MB of "Smart Cache" between the cores, support for SSE 4.2, and the latest Intel Virtualization Technology capabilities for providing the best Linux virtualization experience.
The motherboard was still the ASRock X58 SuperComputer, since from its BIOS it allows manipulating the number of enabled CPU cores as well as Hyper Threading, which allows us to easily adjust the number of cores during the testing process. We previously used this for looking at the LLVMpipe scaling performance with the same Intel CPU. Other hardware included 3GB of DDR3 system memory, 320GB Seagate ST3320620AS HDD, and a NVIDIA GeForce GTX 460 graphics card.
For the tests published earlier this week we used Ubuntu 10.10; however, at the request of Red Hat's virtualization group, we switched to Fedora 14 for this testing to represent a more recent and proper KVM virtualization experience. Fedora 14 x86_64 has the Linux 2.6.35 kernel, GNOME 2.32.0, X.Org Server 1.9.0, GCC 4.5.1, and an EXT4 file-system. Fedora 14 was used on both the host and the guest virtualized instance.
To look at multi-core virtualization performance, we tested the system host and the KVM virtualized instance with 1, 2, 4, 6, and 12 cores available. All but the 12-core test were done by simply enabling the respective number of CPU cores on the Core i7 970; for the 12-core test, all CPU cores were enabled plus Hyper Threading.
The SMP virtual test suite available within Phoronix Test Suite 3.0 "Iveland" was used as our battery of CPU-focused benchmarks to see how well virtualized guests perform and scale to multiple cores. These tests include Apache, a timed compilation process of Apache, C-Ray, CLOMP, 7-Zip compression, PBZIP2 compression, GraphicsMagick, HMMer, NASA NAS Parallel Benchmarks, Smallpt, TTSIOD Renderer, and x264. Now let us see whether "Virtualization is known to work badly with virtual CPUs" is fact or fiction!
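Scaling tests like these boil down to timing the same CPU-bound workload with different worker counts, then comparing speedup (T1/TN) and parallel efficiency (speedup/N). A minimal sketch of that measurement, independent of the Phoronix Test Suite:

```python
import time
from multiprocessing import Pool

def burn(n):
    """A simple CPU-bound work unit; any compute kernel would do."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed_run(workers, tasks=4, size=100_000):
    """Time the same batch of tasks with a given number of worker processes."""
    start = time.perf_counter()
    with Pool(workers) as pool:
        pool.map(burn, [size] * tasks)
    return time.perf_counter() - start

if __name__ == "__main__":
    t1 = timed_run(1)
    for n in (2, 4):
        speedup = t1 / timed_run(n)
        print(f"{n} workers: speedup {speedup:.2f}x, efficiency {speedup / n:.0%}")
```

Efficiency that stays near 100% at every core count is the C-Ray-style result described below, while efficiency that collapses past four cores matches the CLOMP and 7-Zip behavior.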


When starting with the Apache web-server benchmark, the KVM virtualized guest performs very poorly against the system's host performance. The performance between the two instances was close when only one CPU core was enabled, while the host's performance went up linearly until hitting four cores and from there began to flatten out in this web benchmark. The KVM instance meanwhile was flat the entire time and did no scaling at all with the CPU core count.
Our Apache web-server benchmark was a bit shocking with the KVM guest not changing at all, but when moving onto other tests it was a different story. With the timed Apache compilation, the guest was expectedly slower than the host was, but it scaled at each step of the way to the same abilities as the host operating system. This is an example of the virtualization scaling performance being done well.
C-Ray is one of the test profiles where our earlier virtualization tests have shown performance nearly at the same level as the host. Today's tests show that this ray-tracing software runs at the same speed as the Linux host not only with one core or all cores enabled, but at each step along the way. There is no overhead in the virtualized guest due to the number of "virtual CPUs" within the KVM guest.
CLOMP is one of the newest test profiles to OpenBenchmarking.org / Phoronix Test Suite 3.0 and it is a government test looking at the OpenMP efficiency across multiple cores. The CLOMP test shows the host and guest speeding up the same up until four cores are hit. Once enabling six or twelve cores, the virtualized guest was much less efficient than the host.

It is a similar story with the 7-Zip compression benchmark where the efficiency of the guest begins to deviate from the host when having more than four CPU cores to tap.
The Parallel BZIP2 test did not illustrate this problem and was like the C-Ray results where it scaled very well with the increasing core count.
With the OpenMP-powered GraphicsMagick resizing test, the virtualized guest performance actually dropped when six and twelve cores were enabled.
With the image sharpening operation in GraphicsMagick, at least the performance didn't degrade when going beyond four cores, but it wasn't as fast as the system host.

For both the system host and KVM guest, the CG.B test in NPB did not take advantage of more than four cores.
In the NPB EP.B test, however, it did continue scaling up to 12 threads.
With LU.A, the performance dropped off for the KVM guest after four cores.
Running in a virtualized environment, regardless of the core count, minimally affects Smallpt, like C-Ray.

The TTSIOD 3D Renderer results for the KVM Fedora 14 guest were interesting and similar to one of the GraphicsMagick results from earlier where having six or twelve cores available to the guest instance had negatively affected the performance. This was to the point that having 12 cores available to the guest running TTSIOD was at the same speed as having one core available, while four cores was the sweet spot running more than twice as fast. This is while the host Fedora 14 continued taking advantage of the extra threads on the Intel Core i7.
Lastly, with the x264 media encoding benchmark, with one and two cores enabled the performance was close between the host and guest, but the VT-x virtualized guest began to stray as the core count increased.
"Virtualization is known to work badly with virtual CPUs." So is that a fact? Not entirely. There are some cases in the results published today where the KVM guest didn't scale too well after a certain point, but there are also cases where the Kernel-based Virtual Machine guest running Fedora 14 had no problems running at the same speed as the Fedora 14 host across the available 12 threads on the Intel Core i7 970 processor. It was really a mixed bag, but regardless, there is always room to optimize Linux virtualization performance in a multi-threaded environment. This could become a greater issue as CPUs continue gaining more processing cores.



Intel Core i5 2500K Linux Performance

Published on January 24, 2011
Written by Michael Larabel



Earlier this month Intel released their first "Sandy Bridge" processors to much excitement. However, for Linux users seeking to utilize the next-generation Intel HD graphics found on these new CPUs, it meant problems. Up to this point we have largely been looking at the graphics side of Sandy Bridge, and while we have yet to publish any results there due to some isolated issues, on the CPU side the Linux experience and performance have been nothing short of incredible. Here are the first Linux benchmarks of the Intel Core i5 2500K processor.
The Core i5 2500K is one of the Intel Sandy Bridge processors to launch earlier this month and it's a quad-core part without Hyper Threading that is clocked at 3.3GHz but has a maximum Turbo Frequency of 3.7GHz. The Core i5 2500K is equipped with 6MB of Intel Smart Cache, supports SSE 4.1 / SSE 4.2 and the new AVX extensions, is manufactured on a 32nm process like the other Sandy Bridge CPUs, and has a maximum TDP of 95 Watts. Its current retail price is just above $200 USD.
As we had not even received this Intel Core i5 CPU until days after its launch, chances are you are already well familiar with the Sandy Bridge micro-architecture from the other publications that received the processors in advance. With that said, in this article we will thus focus upon our primary interest and that is the Linux support and performance.
Aside from the problems we and others have encountered with the integrated graphics support, the rest of our Sandy Bridge Linux experience has been nothing but phenomenal. There have been no issues with kernel panics or other odd behavior like we have experienced in select instances in the past when utilizing brand new CPUs under Linux. The new Intel chipsets required for Sandy Bridge support, which right now are the H67 and P67, are also playing well with modern Linux distributions.
So far three Sandy Bridge motherboards have been tested at Phoronix and they have all worked just fine with Linux, aside from the usual caveat of LM_Sensors not supporting the motherboard's sensors and USB 3.0 support at times being finicky.
Today's Core i5 2500K benchmarking under Linux was done with Ubuntu 10.10 using the stock components like GNOME 2.32.0, X.Org Server 1.9.0, GCC 4.4.5, and an EXT4 file-system, but with a vanilla Linux 2.6.37 kernel installed to ensure the most recently declared stable Sandy Bridge code. The i5 2500K was tested with an ASRock P67 Pro3 motherboard having 2GB of OCZ DDR3-1333MHz memory, an OCZ 60GB Vertex 2 SSD, and a NVIDIA GeForce GTX 460 768MB graphics card. The binary NVIDIA 260.19.29 driver was used with the GeForce GTX 460 under Linux.
This processor was tested not only at its stock 3.3GHz / Turbo 3.7GHz speed but also when overclocked to 4.00GHz and then again when it was overclocked to 4.20GHz. The processors we had available for comparison in this testing were an Intel Core i5 750 (2.67GHz Quad-Core), Intel Core i7 870 (2.67GHz Quad-Core + Hyper-Threading), Intel Core i7 920 (2.67GHz Quad-Core + Hyper-Threading), and Intel Core i7 970 (3.20GHz Six-Core + Hyper-Threading). Besides switching out the CPUs, the other principal components remained the same except for also having to switch out the motherboards for socket/chipset differences. The i5 750 and i7 870 were used in conjunction with the ECS P55H0A motherboard while the i7 920 and i7 970 had the ASRock X58 SuperComputer.
Via the latest Phoronix Test Suite 3.0 "Iveland" and OpenBenchmarking.org code we ran the following test profiles across this spectrum of Intel Core processors under Linux: World of Padman, 7-Zip, Parallel BZIP2 Compression, Himeno, Bullet, C-Ray, POV-Ray, Smallpt, HMMer, Minion, NAS Parallel Benchmarks, timed Apache compilation, timed Linux kernel compilation, CLOMP, OpenSSL, x264, PostgreSQL, and Apache.
The World of Padman game is not particularly exciting as a benchmark for modern CPUs, but as ioquake3-based games remain popular with many Linux users, its results were included. These games are CPU bound, especially with a GeForce GTX 460 graphics card using the proprietary NVIDIA driver. When looking at the performance of the Core i5 2500K with World of Padman at 1920 x 1080, its performance was slightly elevated above the stock Intel Core i7 970 "Gulftown" CPU and then obviously additional gains in the frame-rate were made when overclocking the Sandy Bridge hardware. In reality though these gains are not too beneficial because even with an Intel Core i5 750 the frame-rate at this resolution is nearly 400 FPS.
With something a bit more interesting, the 7-Zip compression test, the Core i7 970 with its six physical cores plus Intel Hyper Threading, was able to outperform the Core i5 2500K. The Core i5 2500K was delivering greater performance per-core, but the 7-Zip program was able to take advantage of all available processing cores on the i7-970, which was enough to put it in the lead. In this test, the Core i5 2500K performance was comparable to that of the Core i7 870.
When running the Parallel BZIP2 compression program, the Core i7 970 was still the forerunner even when the Core i5 2500K was overclocked above 4.00GHz.
Himeno, a Poisson Pressure Solver, found greater performance with the Core i5 2500K than with the Core i7 970 due to the much greater per-core performance with Sandy Bridge than Gulftown. In fact, the quad-core i5-2500K delivered 47% more MFLOPS than the i7-970 Gulftown.
While the Bullet Physics Engine is multi-threading friendly and is very computational heavy, the Core i5 2500K was able to edge itself past the Core i7 970 in all of these physics tests.
In some of the Bullet benchmarks it was a tight race between the i5-2500K and i7-970, but in other tests, such as the convex trimesh computation, the i5-2500K was able to deliver a noticeable lead and it continued to separate itself from the competition when overclocked.
C-Ray is one of our favorite multi-threaded ray-tracing benchmarks and here the performance of the Core i7 970 and Core i5 2500K when both were at their stock speeds was in a very tight race. The i5-2500K ended up winning when being overclocked to 4.0GHz and beyond, but at stock speeds, it was a very tough game. Regardless, Sandy Bridge remains quite compelling considering the core count between the i5-2500K and i7-970 as well as the price difference.
POV-Ray is an industry-standard benchmark, but POV-Ray 3.6 is single-threaded. The Sandy Bridge processor obviously won here with ease and it was about 11% faster than the high-end Gulftown.
With Smallpt, a very lightweight multi-threaded path tracing program, it preferred a greater number of CPU cores rather than the i5-2500K's greater per-core performance but without Hyper Threading. It took the Core i5 2500K running at 4.2GHz to deliver similar results to the Core i7 970.

The Core i7 970 was also the favored processor when it came to the scientific HMMer application with its Pfam database search.
When looking at the Solitaire performance with Minion, the i5-2500K performance was well in front of the i7-970 by a difference of over 30%.
The NASA NAS Parallel Benchmarks (NPB) are always interesting when looking at processors. With the CG.B test, the Core i5 2500K was marginally in front of the Core i7 970 and there were only slight gains when the Core i5 2500K was overclocked.
While the NPB CG.B test favored the Sandy Bridge micro-architecture, with the EP.B test it heavily preferred the older Core i7 970. When comparing the processors at their stock speeds, the Core i7 970 was approximately 71% faster in this test than the Core i5 2500K.
With the IS.C test, the table flipped again where the Core i5 2500K came out in front by 26%.
With the last NPB test in this article, SP.A, the i7-970 and i5-2500K performance when overclocked was nearly even.
When building the Apache web-server with GCC 4.4.5 and having the Phoronix Test Suite automatically set the job count to twice the number of available CPU threads, the Core i7 970 came out marginally ahead of the Core i5 2500K and it required the CPU being overclocked to 4.2GHz until its performance was similar.
The time to build the Linux kernel is much more important, though, since it's one of the biggest and most time-consuming tasks regularly run by a portion of Linux users. The Core i7 970 remained faster, with a time of about four and a half minutes to build the Linux 2.6.25 kernel. The quad-core Core i5 2500K meanwhile required nearly six minutes to build the Linux kernel at its stock speeds. However, a six-minute build for a quad-core CPU is still great, and when overclocked its numbers are more in line with the six-core i7-970 that additionally offers Hyper Threading.
With CLOMP we are able to look at each processor's static OpenMP efficiency over the number of available cores. The Core i5 2500K numbers are right in line with the Core i5 750, which is the other Intel quad-core CPU used in this test that lacks Hyper Threading support.
When looking at the OpenSSL RSA 4096-bit signing performance, the Core i5 2500K had annihilated the Core i7 970. The quad-core Intel Core i5 2500K was 61% faster than the Intel Core i7 970 when both were at their stock speeds.
When looking at the x264 video encoding performance on the Core i5 2500K, its performance fell far short of the Intel Core i7 970 with its greater number of threads. Though the x264 video encoding performance for Sandy Bridge may be improved greatly once Intel releases their modified x264 library to take advantage of the transcoding support. The Intel developers are also working on video encoding acceleration on the Sandy Bridge CPU that would be exposed via VA-API, but it's not here yet, only the video decoding support on Linux is currently ready and available.
Ending out with some server-focused benchmarks, the PostgreSQL database server heavily favored the Core i7 970, which has three times the number of threads of the i5-2500K. The Core i7 970 performance at stock speeds was 2.33x greater than that of the Sandy Bridge CPU we were testing. Even with this CPU overclocked to 4.2GHz, the i7-970 was still 28% faster.
Lastly, with the usual Apache web-server benchmark, the Core i5 2500K performance was in front of the Core i7 970 by 36%.
There is no doubt about it: Intel's Sandy Bridge is fast. In fact, it is damn fast. The Core i5 2500K retails for just over $220 USD (Amazon.com and NewEgg.com), which is really quite a deal. As shown by many of the benchmarks, the Core i5 2500K commonly outperforms the Core i7 970 in all tests aside from those benchmarks that heavily favor multi-threading via the six physical cores offered by the i7-970 plus Hyper Threading. The Core i7 970, however, retails for $900 USD. There is also the Core i5 2500 non-K processor that retails for about $10 less than the K version, the sole difference being that the 2500K is an unlocked processor and will therefore overclock better. If doing any overclocking, you are best off with the K variant. The K variant does, however, lack VT-d support.
At approximately $100 more than the i5-2500K there is the Intel Core i7-2600K processor that is clocked at 3.4GHz with a Turbo Boost Frequency of 3.8GHz (versus 3.3GHz / 3.7GHz with the i5-2500K), has 8MB of L3 cache versus 6MB with the i5-2500K, and it also offers Hyper Threading. Unfortunately, however, we do not have access to an Intel Core i7 2600K to know how exactly that performs on Linux, but Windows publications have referred to the Intel Core i7 2600K as being the fastest quad-core CPU today.
Overall the Intel Sandy Bridge / Core i5 2500K performance on Linux is splendid and we are certainly confident in this quad-core processor that is delivered at a rather nice value. These new Intel CPUs should have no problems running great with Linux in conjunction with the new H67 / P67 motherboards assuming you are using a modern Linux distribution (i.e. Ubuntu 10.10). The only problem continuing to challenge us is the Intel HD Graphics support with Sandy Bridge, which is something we are continuing to tackle and by the time Ubuntu 11.04 rolls around it will hopefully be a pleasant "out of the box" experience for those running this new hardware.


How To Reverse Engineer A Motherboard BIOS

Since being let go by Novell last year where he worked on the RadeonHD Linux graphics driver and X.Org support within SuSE Linux, Luc Verhaegen has continued work on his VIA Unichrome DDX driver as well as other X.Org code and he has also become involved with the CoreBoot project that aims to create a free software BIOS for most chipsets and motherboards on the market. Luc has worked on support for flashing the BIOS on ATI graphics cards, native VGA text mode support, and other work to help the CoreBoot project. Today at FOSDEM in Brussels, Luc Verhaegen is about to give a talk on reverse engineering a motherboard BIOS.
We have included the slides from his presentation here, and once the talk is over we will upload a video recording on Phoronix. Luc covers everything from the tools needed to reverse engineer a motherboard BIOS to the differences between Phoenix and AMI BIOS setups and the other steps involved.
I hope this helps you in the future :)