google (A Googler) is a Business on Twitter - Blog - Twibs

A Googler (@google)

2,862,621 Followers

352 Friends


Using TensorFlow to keep farmers happy and cows healthy
Thu, 18 Jan 2018 20:20:00 +0000

Editor's Note: TensorFlow, our open source machine learning library, is just that: open to anyone. Companies, nonprofits, researchers and developers have used TensorFlow in some pretty cool ways, and we're sharing those stories here on Keyword. Today we hear from Yasir Khokhar and Saad Ansari, founders of Connecterra, who are applying machine learning to an unexpected field: dairy farming.


Connecterra means "connected earth." We formed the company based on a simple thesis: if we could use technology to make sense of data from the natural world, then we could make a real impact in solving the pressing problems of our time.


It all started when Yasir moved to a farm in the Netherlands, near Amsterdam. We had both spent many years working in the technology industry, and realized that the dairy industry was a sector where technology could make a dramatic impact. For instance, we saw that the only difference between cows that produce 30 liters of milk a day and those that produce 10 liters was the animal's health. We wondered: could technology make cows healthier, and in doing so, help farmers grow their businesses?


That thinking spurred us to start working weekends and evenings on what would eventually become Ida, a product that uses TensorFlow, Google's machine learning framework, to understand and interpret the behavior of cows and give farmers insights about their herds' health.


Ida learns patterns about a cow's movements from a wearable sensor. We use this data to train machine learning models in TensorFlow, and ultimately, Ida can detect activities such as eating, drinking and resting, as well as signals related to fertility, temperature and more. It's not just tracking this information, though. We use Ida to predict problems early, detecting cases like lameness or digestive disorders, and provide recommendations to farmers on how to keep their cows healthy and improve the efficiency of their farms. Using these insights, we're already seeing a 30 percent increase in dairy production on our customers' farms.
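The post doesn't include Connecterra's model code, but for readers curious what "training machine learning models in TensorFlow" on wearable sensor data can look like, here is a minimal, hypothetical sketch using TensorFlow's Keras API. The window size, sensor axes, behavior classes and layer sizes are illustrative assumptions, not Ida's actual architecture:

import numpy as np
import tensorflow as tf

# Illustrative assumptions: 200-sample windows of 3-axis accelerometer data,
# each labeled with one of four hypothetical behaviors.
WINDOW, AXES, CLASSES = 200, 3, 4  # e.g., eating, drinking, resting, walking

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, AXES)),
    tf.keras.layers.Conv1D(32, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Stand-in random data; a real system would train on labeled sensor recordings.
x = np.random.randn(1000, WINDOW, AXES).astype("float32")
y = np.random.randint(0, CLASSES, size=1000)
model.fit(x, y, epochs=3, batch_size=32)

A production system like the one described above would add streaming ingestion, early-warning prediction and farmer-facing recommendations on top of a classifier like this.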


By 2050, the world will have 9 billion people, and we need a 60 percent increase in food production to feed them. Dairy farmer assistance is just one example of how AI could be used to help solve important issues like this. And at Connecterra, by using AI to create solutions to big problems, we think technology can make a real impact.




VMware puts its focus on Android enterprise
Thu, 18 Jan 2018 16:00:00 +0000

Over the last year, we've added a number of new features to Android's modern management modes to enhance security and simplify deployment for IT admins.

Our partners, leaders in the enterprise mobility ecosystem, haven't been standing still either. We love to support and recognize the great work they're doing to help customers adopt Android's latest capabilities.

For example, enterprise mobility management (EMM) partner VMware recently announced it's shifting the default deployment model in the next major release of the VMware AirWatch console to Android enterprise. Customers that use AirWatch to manage their organization's Android devices will benefit from our modern APIs that support the work profile and device owner mode.

As VMware notes on its blog, admins can trust the work profile to keep company data separate and secure on employees' devices. Team members can turn off work apps for those times they want some work-life balance, while also gaining the assurance that their personal data remains private. For companies that deploy their own devices, VMware and other partners support our strong and flexible tools for management.

We're excited to see partners like VMware help customers embrace the latest Android has to offer. For those interested, VMware has released a walkthrough guide, available in VMware TestDrive, which is a good place for customers to get started.

VMware's transition to Android enterprise is a great example of how one of our partners is embracing the modern APIs and latest capabilities of our secure and flexible platform. We're looking forward to seeing further innovations from our partners that will accelerate what businesses can accomplish with enterprise mobility.



Huawei to integrate Android Messages across their Android smartphone portfolio
Thu, 18 Jan 2018 03:00:00 +0000

Over the coming months, Huawei will make it even easier for hundreds of millions of people to express themselves via mobile messaging by integrating Android Messages, powered by RCS, across their Android smartphone portfolio.

With Android Messages and RCS messaging, Huawei devices will now offer a richer native messaging and communications experience. Features such as texting over Wi-Fi, rich media sharing, group chats, and typing indicators will now be a default part of the device. Messages from businesses will also be upgraded on Huawei's devices through RCS business messaging. And Huawei users will be able to make video calls directly from Android Messages through carrier ViLTE and Google Duo.

In addition, to help carriers accelerate deployment of RCS messaging across their networks, we're collaborating with Huawei to offer the Jibe RCS cloud and hub solution to current and prospective carrier partners, as part of an integrated solution with Huawei's current infrastructure. This will enable a faster process for RCS services so more subscribers can get access to RCS messaging.

Huawei will begin integrating Android Messages across their portfolio in the coming months. For more information, see the following release.



The She Word: going behind hardware design with Ivy Ross
Wed, 17 Jan 2018 20:35:00 +0000

Editor's Note: The She Word is a Keyword series all about powerful, dynamic and creative women at Google. Intrigued by the unique aesthetic of Google's new family of hardware devices released in October, we sat down with the woman who leads the design team: Ivy Ross. In the interview below, she shares with us how she approaches design at work, and life outside of work.


How do you explain your job at a dinner party?

I lead a team that creates how a Google product (including Google Home, the Pixel laptop and wearables) looks, feels and acts when you hold it in your hands.

What advice would you give to women starting out in their careers?

Be fearless in using your heart and mind in what you do, and bring more beauty into the world.

Who has been a strong female influence in your life?

My daughter. Seeing the world through her eyes at various stages of her life has given me a "beginner's mind" in much of what I do.

What did you want to be when you grew up?

I've always wanted to be a designer/maker. My dad, who had a big influence on me, was an industrial designer and built the house I grew up in; the house was so ahead of its time that Andy Warhol used it to shoot a movie back in the late '70s.

When I was 12 years old, I made a dress out of chain mail metal and wore it to a bar mitzvah. I linked together thousands of metal squares that made up the dress, designed a necklace that attached to the dress, and made a purse out of the chain mail to match. Even back then, I was designing for efficiency! Instead of bringing needle and thread in case the dress ripped, I carried a screwdriver.

Ivy in her homemade dress (screwdriver not pictured).

What is one habit that makes you successful?

Trusting my instincts on both people and ideas.

How is designing hardware different than designing software?

Unlike software, you can't fix hardware through a new release or update. You need more time up front because once something is tooled, you can make very few adjustments.

What is the most important design principle for Google's hardware?

Human. By that I mean friendly, emotionally appealing and easy to fit into your life and your home. I believe the more time we spend in front of flat screens, the more we'll crave soft and tactile three-dimensional shapes. This is reflected in the fabric in Home Mini, Home Max and Daydream View, the texture of Pixel phones and Pixelbooks, and the soft silicone pad where you rest your wrist while typing on the Pixelbook.

Are there any design innovations you're especially proud of in this year's hardware lineup?

The way we used fabric for Home Mini was not an easy path. It required special construction to accomplish the simplicity of the form with great acoustics. Some of the things that look the simplest can actually be the hardest to construct! I'm proud that we created a beautiful group of products without sacrificing their function.


Where do you find inspiration for your work?

I don't spend much time looking at other electronics beyond what I need to understand about the market. You can't create anything new by only looking within your own category, so I draw inspiration from art, materials, furniture, music, nature and people. My dad taught me how to look at something and see more than what appears on the surface.

You're also a jewelry designer with big accomplishments at a young age. What did you learn from that?

Having gotten my work into museums around the world by age 25, I realized that life is not about the end goal; it's about the journey and the adventure along the way with others.



Exploring art (through selfies) with Google Arts & Culture
Wed, 17 Jan 2018 18:50:00 +0000

The Google Arts & Culture platform hosts millions of artifacts and pieces of art, ranging from prehistory to the contemporary, shared by museums across the world. But the prospect of exploring all that art can be daunting. To make it easier, we dreamt up a fun solution: connect people to art by way of a fundamental artistic pursuit, the search for the self, or, in this case, the selfie.

We created an experiment that matches your selfie with art from the collections of museums on Google Arts & Culture, and over the past few days, people have taken more than 30 million selfies. Even if your art look-alike is a surprise, we hope you discover something new in the process. (By the way, Google doesn't use your selfie for anything else and only keeps it for the time it takes to search for matches.)
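The post doesn't describe the matching algorithm, but a common way to build this kind of look-alike search is to embed each image as a feature vector and return the closest artwork by cosine similarity. The sketch below is a generic illustration of that idea with placeholder embeddings; it is not Google's actual system:

import numpy as np

def cosine_similarity(a, b):
    # Similarity between two feature vectors, in [-1, 1].
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(selfie_vec, artwork_vecs):
    # Return the index and score of the most similar artwork embedding.
    scores = [cosine_similarity(selfie_vec, art) for art in artwork_vecs]
    idx = int(np.argmax(scores))
    return idx, scores[idx]

# Toy example with random 128-dimensional embeddings standing in for real
# face/portrait features produced by some embedding model.
rng = np.random.default_rng(0)
catalog = [rng.standard_normal(128) for _ in range(1000)]
selfie = rng.standard_normal(128)
print(best_match(selfie, catalog))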


That's me, Michelle, the product manager for this feature!

And we hope you'll keep exploring. There's so much to see on Google Arts & Culture, from the annals of American Democracy and the rich history of Latino cultures in the U.S., to the wide world of Street Art and the intricacies of Japanese crafts and traditions. You can visit the rooftop of the Taj Mahal or the famous castles of France's Loire Valley, or even tour the United States' National Parks, all from a mobile device. We also recommend checking out the stories behind what you wear; this collection lets you browse more than 30,000 pieces from 3,000 years of fashion history: try searching for hats and sorting them by color, or sort shoes by time. So cool.

At Google Arts & Culture, our software engineers are always experimenting with new and creative ways to connect you with art and culture. That's how this selfie feature came about, too. We know there's great demand to improve and expand the selfie-matching feature to more locations, including outside the U.S., and we'll share more news as soon as we have it. We'll continue to partner with more museums to bring diverse cultures from every part of the world online (any museum can join!), so you can explore their stories and find even more portraits.

In the meantime, you can download the Google Arts & Culture app for iOS or Android and get face to face with art!



Bill Protection on Project Fi: data when you need it, and savings when you don't
Wed, 17 Jan 2018 17:00:00 +0000

With Project Fi, we built our $10/GB "pay for what you use" pricing to put you in control of your phone plan and how much you pay for it. Today, we're taking the next step in that journey with Bill Protection: a new take on a phone plan that combines the simplicity of our existing pricing with the flexibility of an unlimited plan.


Data when you need it

Bill Protection gives you the peace of mind to use extra data when you need it. In months when you use more than 6 GB of data, we'll cap your charges for calls & texts plus data at $80, and allow you to continue using high speed data for free, similar to an unlimited plan. Bill Protection kicks in at different usage points based on the number of people on your plan, and you can see how it would work for your group here.

If you're a super heavy data user, you'll experience slower data speeds in months when you've consumed more than 15 GB of data (less than 1% of current Fi users today). But as always, you'll have the power to customize your plan, and you can opt out of slower speeds by paying $10/GB for your individual data usage above 15 GB.

Never pay for data you don't use

And here's the kicker: with Bill Protection you'll never have to pay for unlimited data in months when you don't actually need it. If you only use 1.4 GB of data, at the end of the month you'll pay just $34 instead of $80. So no matter how much data you use, you can save money with Bill Protection every month.
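For an individual plan, the numbers in this post imply a flat fee for calls & texts plus $10/GB of data, with the total capped at $80 once usage passes 6 GB. The $20 base fee in the sketch below is inferred from the $34 example ($34 minus 1.4 GB at $10/GB) rather than stated explicitly here, and the sketch ignores taxes, group-plan thresholds and the 15 GB slowdown:

def fi_monthly_bill(data_gb, base_fee=20.0, per_gb=10.0, cap=80.0):
    # Estimate an individual Project Fi bill under Bill Protection.
    return min(base_fee + per_gb * data_gb, cap)

print(fi_monthly_bill(1.4))   # 34.0, matching the example above
print(fi_monthly_bill(6.0))   # 80.0, where Bill Protection kicks in
print(fi_monthly_bill(10.0))  # 80.0, extra data at no additional charge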


All the data you need for the Project Fi perks

Finally, Bill Protection still applies to all of the Project Fi goodies you love, including high speed data in 135+ countries, and data-only SIM cards to use in your laptop, tablet or car. If you're traveling abroad, that means you can use all of the apps you need; there's no need to stress about the extra data.

Bill Protection begins rolling out today to individual subscribers and group plans. If you're a current Fi subscriber, you'll see it appear on your next billing cycle. For those not yet signed up for Fi, we're making it easier to try it out by offering up to $120 off some of our Fi-ready phones for a limited time.



Introducing the security center for G Suite: security analytics and best practices from Google
Wed, 17 Jan 2018 16:00:00 +0000

We want to make it easy for you to manage your organization's data security. A big part of this is making sure you and your admins can access a bird's eye view of your security, and, more importantly, that you can take action based on timely insights.

Today, we're introducing the security center for G Suite, a tool that brings together security analytics, actionable insights and best practice recommendations from Google to empower you to protect your organization, data and users.

With the security center, key executives and admins can do things like:

1. See a snapshot of important security metrics in one place. 

Get insights into suspicious device activity, visibility into how spam and malware are targeting users within your organization, and metrics to demonstrate security effectiveness, all in a unified dashboard.


2. Stay ahead of potential threats. 

Admins can now examine security analytics to flag threats. For example, your team can have visibility into which users are being targeted by phishing so that you can head off potential attacks, or when Google Drive files trigger DLP rules, you have a heads up to avoid risking data exfiltration.


3. Reduce risk by adopting security health recommendations.

Security health analyzes your existing security posture and gives you customized advice to secure your users and data. These recommendations cover issues ranging from how your data is stored, to how your files are shared, as well as recommendations on mobility and communications settings.  


Get started

More than 3.5 million organizations rely on G Suite to collaborate securely. If you're a G Suite Enterprise customer, you'll be able to access the security center within the Admin console automatically in the next few days. These instructions can help admins get started, and here are some security best practices to keep in mind.

If you're new to G Suite, learn more about how you can collaborate, store and communicate securely.



Cloud AutoML: Making AI accessible to every business
Wed, 17 Jan 2018 14:00:00 +0000

When we both joined Google Cloud just over a year ago, we embarked on a mission to democratize AI. Our goal was to lower the barrier of entry and make AI available to the largest possible community of developers, researchers and businesses.

Our Google Cloud AI team has been making good progress towards this goal. In 2017, we introduced Google Cloud Machine Learning Engine to help developers with machine learning expertise easily build ML models that work on any type of data, of any size. We showed how modern machine learning services, i.e., APIs such as Vision, Speech, NLP, Translation and Dialogflow, could be built upon pre-trained models to bring unmatched scale and speed to business applications. Kaggle, our community of data scientists and ML researchers, has grown to more than one million members. And today, more than 10,000 businesses are using Google Cloud AI services, including companies like Box, Rolls Royce Marine, Kewpie and Ocado.

But there's much more we can do. Currently, only a handful of businesses in the world have access to the talent and budgets needed to fully appreciate the advancements of ML and AI. There's a very limited number of people who can create advanced machine learning models. And if you're one of the companies that has access to ML/AI engineers, you still have to manage the time-intensive and complicated process of building your own custom ML model. While Google has offered pre-trained machine learning models via APIs that perform specific tasks, there's still a long road ahead if we want to bring AI to everyone.

To close this gap, and to make AI accessible to every business, we're introducing Cloud AutoML. Cloud AutoML helps businesses with limited ML expertise start building their own high-quality custom models by using advanced techniques like learning2learn and transfer learning from Google. We believe Cloud AutoML will make AI experts even more productive, advance new fields in AI and help less-skilled engineers build powerful AI systems they previously only dreamed of.
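Cloud AutoML wraps this up behind a simple interface, but for a feel of what transfer learning means in practice, here is a hand-rolled sketch using the Keras API: a backbone pre-trained on ImageNet is frozen and only a small new classification head is trained on the custom dataset. It illustrates the general technique, not AutoML's internal pipeline, and the class count and input size are arbitrary assumptions:

import tensorflow as tf

NUM_CLASSES = 5  # e.g., five custom product categories (illustrative)

# Pre-trained backbone used as a frozen feature extractor.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

# Small trainable head on top of the frozen features.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds would be a tf.data.Dataset of (image, label) batches for the custom task:
# model.fit(train_ds, epochs=5)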

Our first Cloud AutoML release will be Cloud AutoML Vision, a service that makes it faster and easier to create custom ML models for image recognition. Its drag-and-drop interface lets you easily upload images, train and manage models, and then deploy those trained models directly on Google Cloud. Early results using Cloud AutoML Vision to classify popular public datasets like ImageNet and CIFAR have shown more accurate results with fewer misclassifications than generic ML APIs.

Here's a little more on what Cloud AutoML Vision has to offer:

  • Increased accuracy: Cloud AutoML Vision is built on Google's leading image recognition approaches, including transfer learning and neural architecture search technologies. This means you'll get a more accurate model even if your business has limited machine learning expertise.

  • Faster turnaround time to production-ready models: With Cloud AutoML, you can create a simple model in minutes to pilot your AI-enabled application, or build out a full, production-ready model in as little as a day.

  • Easy to use: AutoML Vision provides a simple graphical user interface that lets you specify data, then turns that data into a high quality model customized for your specific needs.


"Urban Outfitters is constantly looking for new ways to enhance our customers' shopping experience," says Alan Rosenwinkel, Data Scientist at URBN. "Creating and maintaining a comprehensive set of product attributes is critical to providing our customers relevant product recommendations, accurate search results and helpful product filters; however, manually creating product attributes is arduous and time-consuming. To address this, our team has been evaluating Cloud AutoML to automate the product attribution process by recognizing nuanced product characteristics like patterns and neckline styles. Cloud AutoML has great promise to help our customers with better discovery, recommendation and search experiences."

Mike White, CTO and SVP for Disney Consumer Products and Interactive Media, says: "Cloud AutoML's technology is helping us build vision models to annotate our products with Disney characters, product categories and colors. These annotations are being integrated into our search engine to enhance the impact on Guest experience through more relevant search results, expedited discovery and product recommendations on shopDisney."

And Sophie Maxwell, Conservation Technology Lead at the Zoological Society of London, tells us: "ZSL is an international conservation charity devoted to the worldwide conservation of animals and their habitats. A key requirement to deliver on this mission is to track wildlife populations to learn more about their distribution and better understand the impact humans are having on these species. In order to achieve this, ZSL has deployed a series of camera traps in the wild that take pictures of passing animals when triggered by heat or motion. The millions of images captured by these devices are then manually analysed and annotated with the relevant species, such as elephants, lions and giraffes, which is a labour-intensive and expensive process. ZSL's dedicated Conservation Technology Unit has been collaborating closely with Google's Cloud ML team to help shape the development of this exciting technology, which ZSL aims to use to automate the tagging of these images, cutting costs, enabling wider-scale deployments and gaining a deeper understanding of how to conserve the world's wildlife effectively."

If you're interested in trying out AutoML Vision, you can request access via this form.

AutoML Vision is the result of our close collaboration with Google Brain and other Google AI teams, and is the first of several Cloud AutoML products in development. While we're still at the beginning of our journey to make AI more accessible, we've been deeply inspired by what our 10,000+ customers using Cloud AI products have been able to achieve. We hope the release of Cloud AutoML will help even more businesses discover what's possible through AI.




A new pathway to roles in IT Support
Tue, 16 Jan 2018 14:00:00 +0000

Today, we're launching the Google IT Support Professional Certificate hosted on Coursera, a first-of-its-kind online program to prepare people for roles in IT support. With no previous experience required, beginning learners can become entry-level job ready in eight to 12 months. This program is part of Grow with Google, our initiative to help people get the skills they need to find a job.


There's no better example of a dynamic, fast-growing field than IT support. With more and more people relying on computers for some part of their work, growth in IT support is outpacing the average rate for all other occupations. In the United States alone, there are currently 150,000 open IT support jobs (according to Burning Glass), and the average starting salary is $52,000 according to the Bureau of Labor Statistics.

I helped hire Google's IT staff for several years when I led our internal IT support program; it was often challenging to find qualified candidates. But I knew that candidates didn't need traditional four-year college degrees to be qualified, and also found that IT was very teachable. So in 2014 we partnered with the nonprofit organization Year Up to create a program aimed at training and hiring non-traditional talent for IT support internships and full-time roles. The program was a success, and its graduates inspired us to think about how we could make a bigger impact beyond Google. Watch the story of one of our program graduates, Edgar Barragan:

Edgar Barragan: IT Support Specialist

Now we're using the training we implemented at Google as the basis of a new program available to anyone, anywhere, as part of the Grow with Google initiative. No tech experience or college degree is necessary.


With over 64 hours of video lessons and a dynamic mix of hands-on labs and other interactive assessments, all developed by Googlers, this certificate program introduces people to troubleshooting and customer service, networking, operating systems, system administration, automation, and security: all the fundamentals of IT support. Throughout the program, people will hear directly from Googlers whose own foundation in IT support served as a jumping-off point for their careers.


Since we know training is just the first step, we also want to help with the next one: the job search. Once people complete the certificate, they can opt in to share their information directly with top employers, including Bank of America, Walmart, Sprint, GE Digital, PNC Bank, Infosys, TEKSystems, UPMC, and of course, Google, all of whom are looking to hire IT support talent.


To ensure job seekers from all backgrounds have access to the program, we're subsidizing the cost of the certificate on Coursera to $49/month and providing financial support to more than 10,000 learners in the United States. Need-based scholarships, funded by Google.org grants, will be offered through leading nonprofits focused on underrepresented communities, including Year Up, Per Scholas, Goodwill, Student Veterans of America, and Upwardly Global. Full financial assistance is also available to those who qualify.


You can find out more and enroll at the Google IT Support page on Coursera.


I've seen firsthand how educational opportunities can transform people's careers and lives. By making the Google IT Support Professional Certificate accessible on Coursera, we hope to open the door for everyone to begin a career in technology.



Expanding our global infrastructure with new regions and subsea cables
Tue, 16 Jan 2018 13:00:00 +0000

At Google, we've spent $30 billion improving our infrastructure over three years, and we're not done yet. From data centers to subsea cables, Google is committed to connecting the world and serving our Cloud customers, and today we're excited to announce that we're adding three new submarine cables and five new regions.

We'll open our Netherlands and Montreal regions in the first quarter of 2018, followed by Los Angeles, Finland, and Hong Kong, with more to come. Then, in 2019 we'll commission three subsea cables: Curie, a private cable connecting Chile to Los Angeles; Havfrue, a consortium cable connecting the U.S. to Denmark and Ireland; and the Hong Kong-Guam Cable system (HK-G), a consortium cable interconnecting major subsea communication hubs in Asia.

Together, these investments further improve our network, the world's largest, which by some accounts delivers 25% of worldwide internet traffic. Companies like PayPal leverage our network and infrastructure to run their businesses effectively.

"At PayPal, we process billions of transactions across the globe, and need to do so securely, instantaneously and economically. As a result, security, networking and infrastructure were key considerations for us when choosing a cloud provider," said Sri Shivananda, PayPal's Senior Vice President and Chief Technology Officer. "With Google Cloud, we have access to the world's largest network, which helps us reach our infrastructure goals and best serve our millions of users."

Figure 1. Diagram shows existing GCP regions and upcoming GCP regions.
Figure 2. Diagram shows three new subsea cable investments, expanding capacity to Chile, Asia Pacific and across the Atlantic.

Curie cable

Our investment in the Curie cable (named after renowned scientist Marie Curie) is part of our ongoing commitment to improve global infrastructure. In 2008, we were the first tech company to invest in a subsea cable as a part of a consortium. With Curie, we become the first major non-telecom company to build a private intercontinental cable.

By deploying our own private subsea cable, we help improve global connectivity while providing value to our customers. Owning the cable ourselves has some distinct benefits. Since we control the design and construction process, we can fully define the cable's technical specifications, streamline deployment and deliver service to users and customers faster. Also, once the cable is deployed, we can make routing decisions that optimize for latency and availability.

Curie will be the first subsea cable to land in Chile in almost 20 years. Once deployed, Curie will be Chile's largest single data pipe. It will serve Google users and customers across Latin America.

Havfrue cable

To increase capacity and resiliency in our North Atlantic systems, we're working with Facebook, Aqua Comms and Bulk Infrastructure to build a direct submarine cable system connecting the U.S. to Denmark and Ireland. This cable, called Havfrue (Danish for "mermaid"), will be built by TE SubCom and is expected to come online by the end of 2019. The marine route survey, during which the supplier determines the specific route the cable will take, is already underway.

HK-G cable

In the Pacific, we're working with RTI-C and NEC on the Hong Kong-Guam cable system. Together with Indigo and other existing subsea systems, this cable creates multiple scalable, diverse paths to Australia, increasing our resilience in the Pacific. As a result, customers will experience improved capacity and latency from Australia to major hubs in Asia. It will also increase our network capacity at our new Hong Kong region.

Figure 3. A complete list of Google's subsea cable investments. New cables in this announcement are highlighted yellow. Google subsea cables provide reliability, speed and security not available from any other cloud.

Google has direct investment in 11 cables, including those planned or under construction. The three cables highlighted in yellow are being announced in this blog post. (In addition to these 11 cables where Google has direct ownership, we also lease capacity on numerous additional submarine cables.)

What does this mean for our customers?

These new investments expand our existing cloud network. The Google network has over 100 points of presence (map) and over 7,500 edge caching nodes (map). This investment means faster and more reliable connectivity for all our users.

Simply put, it wouldn't be possible to deliver products like Machine Learning Engine, Spanner, BigQuery and other Google Cloud Platform and G Suite services at the quality of service users expect without the Google network. Our cable systems provide the speed, capacity and reliability Google is known for worldwide, and at Google Cloud, our customers are able to make use of the same network infrastructure that powers Google's own services.

While we haven't hastened the speed of light, we have built a superior cloud network as a result of the well-provisioned direct paths between our cloud and end users, as shown in the figure below.

Figure 4. The Google network offers better reliability, speed and security performance as compared with the nondeterministic performance of the public internet, or other cloud networks. The Google network consists of fiber optic links and subsea cables between 100+ points of presence, 7,500+ edge node locations, 90+ Cloud CDN locations, 47 dedicated interconnect locations and 15 GCP regions.

We're excited about these improvements. We're increasing our commitment to ensure users have the best connections in this increasingly connected world.



Eight things you need to know about Hash Code 2018
Tue, 16 Jan 2018 09:00:00 +0000

Are you up for a coding challenge? Team up to solve an engineering problem from Google: registration for Hash Code 2018 is now open.

Hash Code is Google's flagship team programming competition for students and professionals in Europe, the Middle East, and Africa. You pick your team and programming language, we pick a Google engineering problem for you to solve. Thinking about competing in Hash Code? Here's what you need to know before you sign up:

1. This is the fifth edition of Hash Code. Hash Code started in 2014 with just 200 participants. We've grown a bit since the early days; last year more than 26,000 developers teamed up to compete from 100+ countries across Europe, the Middle East and Africa.

2. Problems are modeled after Google engineering challenges. We want participants to experience what software engineering is like at Google, so we model Hash Code problems after challenges faced by Google engineering teams. Past problems have included optimizing video serving on YouTube, routing Street View cars through a busy city, and optimizing the layout of a Google data center.  

3. You compete in a small team (just like engineers at Google!). To compete in Hash Code, you need to form a team of two to four people. This means it's not just about what you know individually, but about how you and your team can work together to tackle the problem.

4. Hash Code kicks off with an Online Qualification Round on Thursday, March 1. It all starts with a YouTube livestream at 18:30 CET sharp, after which the problem is released and teams have four hours to code. 

5. Hubs add extra excitement to the Online Qualification Round. Hubs are meetups where teams in the same area can come together to compete in the Online Qualification Round. They're also a great opportunity for you to connect with other developers in your community. More than 300 hubs have been registered so far, and it's not too late to organize a hub if there isn't one near you already.

Some competitors having fun at a few of the hubs during the 2017 Hash Code Online Qualification Round.
6. The Final Round will be held at Google Dublin. Top teams from the Online Qualification Round will be invited to our European Headquarters in April to vie for the title of Hash Code 2018 Champion.

7. It's a competition, but it's also about having fun! As Ingrid von Glehn, a software engineer at Google London who is part of the Hash Code organizing team, puts it: "We design the problems to be challenging, but not intimidating. It's important to us that everyone has fun while taking part."

Join in on all the fun online through our Facebook event and G+ community, using the #hashcode tag. These channels are also great spaces to connect with other engineers and find team members.


8. You can register today. Ready to accept the challenge? Be sure to sign up before registration closes on February 26.

*Featured image: Teams hard at work tackling our wireless router placement problem during 2017's Final Round in Paris.



#teampixel community member Austin Cameron is living for the city
Fri, 12 Jan 2018 19:45:00 +0000

Happy New Year, Team Pixel! There are so many picture-worthy moments ahead. Helping us get started on 2018 photography is Pixel enthusiast and photographer @ustincameron. He's a regular #teampixel contributor who's working through a personal goal of shooting a photo a day for 1,000 days, with more than 700 already under his belt!

He has a talent for shooting in low light, so we reached out to get some tips and find out more about his approach to shooting the nation's most popular cities.

"Cityscapes are a fun challenge," Austin says. "For most people, the skyline is already iconic, so I like to try and make them do a double take by showcasing it from an entirely different perspective than previously recognized."


@ustincameron's tips for shooting in low light situations:

  • Do your best to prevent light pollution from entering your frame.
  • Make sure to set the focus on dark areas with details you want to bring out.
  • Don't be scared to lie on the ground for the perfect shot!

Keep tagging your photos with #teampixel and you might be featured next.



The High Five: you get a search, you get a search, everybody gets a search!
Fri, 12 Jan 2018 18:40:00 +0000

Oprah's speech had people buzzing, while Jimmy Ma spun to internet fame at the U.S. Figure Skating Championships. Here are some of the most-searched trends of the week (with data from the Google News Lab).

A brighter morning, even during our darkest nights

"Is Oprah going to run for president?" was a top searched question this week, after the icon's rousing speech at the Golden Globes. Searches for "Oprah for President" were up more than 5,000 percent, and search interest in "Oprah 2020" was 1,200 percent higher than "Trump 2020." And the region with the most searches for "Oprah 2020"? Home of the White House, Washington, D.C.

Making waves

The recent raw water trend has people wondering whether drinking untreated water is actually good for you, and search queries poured in: "How is well water different from raw water?" "Who endorses raw water?" and "How much does raw water cost?" This week, searches for "raw water" were 800 percent higher than "raw milk" and 300 percent higher than "raw food."

Roll tide

Alabama Crimson Tide freshman quarterback Tua Tagovailoa had his moment in the search spotlight this week. After leading his team to an overtime victory in the College Football Playoff National Championship, searches for his name increased nearly 7,000 percent, and searchers were interested in his name, his stats, and his hands (which are reportedly quite large, and were searched 450 percent more than those of famously large-handed NFL quarterback Russell Wilson).

Ice skating turns up

Search interest in figure skater Jimmy Ma jumped 1,300 percent this week after he brought hip hop to the ice skating rink. His routine at the U.S. Figure Skating Championships featured Lil Jon's hit song "Turn Down for What," prompting these top searches: "Jimmy Ma freestyle," "Jimmy Ma goes viral," and "Jimmy Ma hiphop ice skating routine."

What happens in Vegas...

Will stay in tech news. The Consumer Electronics Show (CES), which showcases future tech products, took place in Las Vegas this week. Some technical difficulties meant that "CES power outage" was searched 150 percent more than "CES news." Other top searches about the event were "When is CES 2018?" "What does CES stand for?" and "How to go to CES."



A doodle celebrating Zhou Youguang and the ABCs of learning Mandarin
Fri, 12 Jan 2018 11:15:00 +0000

Mandarin Chinese is a tremendously rich logographic language, meaning every word is represented by a unique character or combination of characters. And there are a lot: the largest Chinese dictionaries contain more than 60,000 different ones.


The sheer volume makes it challenging for non-native speakers to master Mandarin. As anyone who has studied the language knows, it's difficult remembering the pronunciations of thousands of characters!


Thanks to Zhou Youguang's work, it's now a lot easier to learn Mandarin. An economist by training, in the 1950s he was tasked by the Chinese government with turning Chinese characters into words with Roman letters. Over three years, Zhou developed pinyin, a phonetic alphabet for Mandarin. With the help of just 26 letters of the Roman alphabet and four tonal marks, pinyin allows for the accurate pronunciation of any of Mandarin's 60,000 or so characters, no matter how obscure. It's thanks to Zhou that we can learn that 拼音 is pronounced "pīn yīn" by reading its phonetic spelling, instead of listening to someone else pronounce it first.


So today's doodle in countries including Argentina, Chile, Indonesia, Japan, New Zealand, Singapore, Sweden and the U.S. celebrates Zhou's 112th birthday. Zhou passed away at the ripe old age of 111 last year. He lived long enough to see people using pinyin to type Mandarin characters on computers and mobile phones. By inventing pinyin, Zhou didn't just help generations of students learn Mandarin. He also paved the way for a new generation of Mandarin speakers to communicate online.



Stick to your New Year's resolutions with a little help from Google Home
Thu, 11 Jan 2018 18:30:00 +0000

In 2018, I'm committed to getting in better shape. As with all New Year's resolutions, the hard part will be actually sticking to it. But this year, I'll have help from my Google Assistant. No matter what your resolution is, here are a few ways your Google Home, Mini or Max can keep you on track:

Thanks to my Assistant on Google Home, 2018 is the year I'm actually sticking to my resolution.




Protecting our Google Cloud customers from new vulnerabilities without impacting performance
Thu, 11 Jan 2018 16:00:00 +0000

If you've been keeping up on the latest tech news, you've undoubtedly heard about the CPU security flaw that Google's Project Zero disclosed last Wednesday. On Friday, we answered some of your questions and detailed how we are protecting Cloud customers. Today, we'd like to go into even more detail on how we've protected Google Cloud products against these speculative execution vulnerabilities, and what we did to make sure our Google Cloud customers saw minimal performance impact from these mitigations.

Modern CPUs and operating systems protect programs and users by putting a "wall" around them so that one application, or user, can't read what's stored in another application's memory. These boundaries are enforced by the CPU.

But as we disclosed last week, Project Zero discovered techniques that can circumvent these protections in some cases, allowing one application to read the private memory of another, potentially exposing sensitive information.

The vulnerabilities come in three variants, each of which must be protected against individually. Variant 1 and Variant 2 have also been referred to as "Spectre." Variant 3 has been referred to as "Meltdown." Project Zero described these in technical detail, the Google Security blog described how we're protecting users across all Google products, and we explained how we're protecting Google Cloud customers and provided guidance on security best practices for customers who use their own operating systems with Google Cloud services.

Surprisingly, these vulnerabilities have been present in most computers for nearly 20 years. Because the vulnerabilities exploit features that are foundational to most modern CPUs, and were previously believed to be secure, they weren't just hard to find; they were even harder to fix. For months, hundreds of engineers across Google and other companies worked continuously to understand these new vulnerabilities and find mitigations for them.

In September, we began deploying solutions for both Variants 1 and 3 to the production infrastructure that underpins all Google products, from Cloud services to Gmail, Search and Drive, and more-refined solutions in October. Thanks to extensive performance tuning work, these protections caused no perceptible impact in our cloud and required no customer downtime, in part due to Google Cloud Platform's Live Migration technology. No GCP customer or internal team has reported any performance degradation.

While those solutions addressed Variants 1 and 3, it was clear from the outset that Variant 2 was going to be much harder to mitigate. For several months, it appeared that disabling the vulnerable CPU features would be the only option for protecting all our workloads against Variant 2. While that was certain to work, it would also disable key performance-boosting CPU features, thus slowing down applications considerably.

Not only did we see considerable slowdowns for many applications, we also noticed inconsistent performance, since the speed of one application could be impacted by the behavior of other applications running on the same core. Rolling out these mitigations would have negatively impacted many customers.

With the performance characteristics uncertain, we started looking for a "moonshot": a way to mitigate Variant 2 without hardware support. Finally, inspiration struck in the form of "Retpoline," a novel software binary modification technique that prevents branch-target injection, created by Paul Turner, a software engineer who is part of our Technical Infrastructure group. With Retpoline, we didn't need to disable speculative execution or other hardware features. Instead, this solution modifies programs to ensure that execution cannot be influenced by an attacker.

With Retpoline, we could protect our infrastructure at compile-time, with no source-code modifications. Furthermore, testing this feature, particularly when combined with optimizations such as software branch prediction hints, demonstrated that this protection came with almost no performance loss.

We immediately began deploying this solution across our infrastructure. In addition to sharing the technique with industry partners upon its creation, we open-sourced our compiler implementation in the interest of protecting all users.

By December, all Google Cloud Platform (GCP) services had protections in place for all known variants of the vulnerability. During the entire update process, nobody noticed: we received no customer support tickets related to the updates. This confirmed our internal assessment that in real-world use, the performance-optimized updates Google deployed do not have a material effect on workloads.

We believe that Retpoline-based protection is the best-performing solution for Variant 2 on current hardware. Retpoline fully protects against Variant 2 without impacting customer performance on all of our platforms. In sharing our research publicly, we hope that this can be universally deployed to improve the cloud experience industry-wide.

This set of vulnerabilities was perhaps the most challenging and hardest to fix in a decade, requiring changes to many layers of the software stack. It also required broad industry collaboration since the scope of the vulnerabilities was so widespread. Because of the extreme circumstances of extensive impact and the complexity involved in developing fixes, the response to this issue has been one of the few times that Project Zero made an exception to its 90-day disclosure policy.

While these vulnerabilities represent a new class of attack, they're just a few among the many different types of threats our infrastructure is designed to defend against every day. Our infrastructure includes mitigations by design and defense-in-depth, and we're committed to ongoing research and contributions to the security community and to protecting our customers as new vulnerabilities are discovered.



Seven kinds of Local Guides you might spot on Google Maps
Wed, 10 Jan 2018 16:00:00 +0000

What kind are you?

Satellites are famously effective for mapping, but they don't take photos of must-have breakfast sandwiches, update hours of operation or tell families when places are wheelchair accessible. That's Local Guides territory. Local Guides are people who share information on Google Maps to help others discover where to go, and there are more than 60 million of them in our global community, with the most prolific contributors hailing from the United States, India and Brazil. They guide worldwide users each day, rack up millions of views, support small businesses and literally put important, sometimes vital, information on the map for others to use.

Anyone can become a Local Guide, and once you do, you'll become part of a dynamic community. Each contributor is different, with specific passions and ways of sharing. Here are seven inspiring specialists we've spotted, with tips on how to do what they do.

1. The visualist

Local Guides love taking photos; in fact, they shared more than 300 million of them on Google Maps last year. If you're a visualist, it's your favorite way to contribute.

Loves: Seeking photogenic spots, finding the beauty in everyday places, making the most of golden hour.

Tip: You can share your shots of places right from Google Photos. Just tap the share icon on Android and select Add to Maps. Then select or update the location before you post it.


2. The fact hunter

In many parts of the world, essential information like where to find an ATM or a clinic may be hard to come by. Fact hunters uncover these details to share with others on Google Maps.

Loves: Accurate listings on Google Maps, adding missing info for small businesses, moving location pins so people can find places.

Tip: On Google Maps for mobile, go to Your contributions in the menu and tap Uncover missing info to see which places need your expertise.


3. The trailblazer

If a friend has ever asked you for the hottest new restaurant in town, you might be a trailblazer. These Local Guides have the pulse of their cities and love being the first to try a new place.

Loves: Adding the first review or photo to a place, putting unlisted places on the map.

Tip: Check out restaurants and local shops opening this year so you can add their first photos and get those views.


4. The sage

If a review has ever helped you choose whether to stay by the sea or by the bay, you can thank a sage. No matter where they go, these Local Guides write about all the inside tips, from the best exhibits to visit to the best instructors to take at a fitness studio.

Loves: Dropping knowledge and tips in reviews, answering yes/no questions about places that pop up on your screen, responding to others via the new Questions & answers feature that shows up on Google Maps for Android.

Tip: Turn on your Location History to easily review all the places you've been, and make lists of your favorites.


5. The multimedia guru

Equipped with plenty of battery packs, this Local Guide helps you see a place from every angle with 360 photos and video contributions like visual tours and on-camera reviews. 

Loves: Adding 360 photos and videos of places, going to great lengths for the perfect shot.

Tip: If you take a video on your phone, you can add up to 30 seconds of it to a place the same way you'd add a photo to a place on Google Maps.


6. The connector

This Local Guide's contributions go beyond Google Maps. From hosting meet-ups with other community members to chiming in on Connect (the forum for Local Guides), the connector is a friendly face for newbies and gurus alike.

Loves: Hosting meet-ups, making lists about places to go and sharing them with friends, liking other people's reviews.

Tip: Find out if a Local Guides meet-up is happening near you.


7. The advocate

Local Guides champion many causes, from helping small businesses to making it easier for wheelchair users to get around. The advocate keeps a cause top-of-mind while they share info, like whether a place has a wheelchair ramp.

Loves: Doing good in the world for locals and visitors alike, using the handy accessibility guide to share helpful info, watching Local Heroes videos on the Local Guides YouTube channel.

Tip: When you mark something as wheelchair-accessible, it helps families with strollers, too.


Which kind of Local Guide are you? However you want to contribute, check out your Local Guides status and the places that need your knowledge by visiting Your contributions in the Google Maps menu. The more you share, the higher the level you reach, earning points for each review, photo and bit of info you add on Google Maps.



Our 17 favorite education moments from 2017
Wed, 10 Jan 2018 14:00:00 +0000

Editor's Note: Happy New Year from all of us on the Google for Education team! We know you count on Google for Education in your classrooms, and we take that responsibility seriously. We remain deeply committed to bringing the best of Google to education, and to expanding learning for everyone. As we look to the year ahead, we're looking back on our 17 favorite moments from 2017.

In 2017, we...

1. Did an hour of code with Chance the Rapper for Computer Science Education Week, surprising a Chicago classroom and announcing a $1.5 million Google.org grant to provide CS for students across Chicago Public Schools. We also released the first-ever programmable Google Doodle and invited students to code their own Google logos.


2. Announced a new initiative called Grow with Google which provides access to digital tools and training for students, teachers, job-seekers and lifelong learners. As part of the announcement, our CEO Sundar Pichai visited one of the Pittsburgh classrooms participating in our new Dynamic Learning Project, a pilot that empowers educators to use technology in meaningful ways.

As part of Grow with Google, our CEO Sundar visited a school in Pittsburgh to learn about their experience participating in the Dynamic Learning Project.

3. Introduced a new generation of Chromebooks that let you use a stylus and flip from laptop to tablet mode. These Chromebooks have cameras on two sides and USB-C charging. New devices from Acer, Asus, HP, Dell and Lenovo come in all shapes, sizes, and price points to meet the needs of different teachers, students, schools and districts.

A next-generation Chromebook with dual cameras flipped into tablet mode.

4. Went back to school with a new resource hub for teachers. On #FirstDayOfClassroom, there are helpful Google for Education tips and tricks from the people who know our tools best: educators. Thanks to input from our dedicated community, we were also able to introduce the most-requested features in Google Classroom and Forms.

5. Met the Internaut, a digital citizenship guru and mascot of Be Internet Awesome, a program to help students make smart decisions online. With resources for students (including the online game Interland), educators, and families, everyone has the tools to learn and participate in digital safety and citizenship. Bonus: we also launched a Digital Citizenship and Safety course.


6. Celebrated International Literacy Day by creating and translating more than 1,000 children's books for StoryWeaver, a Google.org grantee, with the #1000books campaign. Our support of StoryWeaver is part of our 2016-2017 $50 million philanthropic commitment to nonprofit organizations working to close global learning gaps.

7. Were inspired by more than 11,000 girls from 103 countries during the Technovation Challenge. Finalists came to Google's Mountain View headquarters to pitch their projects, which address issues in categories including peace, poverty, environment, equality, education, and health.

Our CEO Sundar Pichai takes a selfie with members of the winning team behind QamCare.

8. Used technology to amplify student stories. Working with the non-profit 826 Valencia, Googlers helped under-resourced students create A Planet Ruled by Love using Tilt Brush. The result was a virtual reality movie that helped kids express themselves through storytelling and technology.

826 Valencia and Google

9. Ate funnel cakes and coded at the Illinois State Fair. We also announced our support of 4-H with a $1.5 million Google.org grant to provide students around the U.S. the opportunity to grow their future skills through computer science programming. Eat your heart out, blue ribbon marmalade.

An Illinois 4-Her on a virtual reality Expedition to see how students coded an ear tag for farmers to keep track of their wandering cattle.

10. Did our research. Partnering with Gallup, we learned that students who are encouraged by a teacher or parent are three times more likely to be interested in learning computer science. 2018 resolution idea: Share more facts like these to help spur educators, families and advocates to encourage all students to learn computer science.

11. Caught Hamilton fever. With support from Google.org and the Gilder Lehrman Institute, 5,000 students from Title I schools in New York, Chicago, and the Bay Area revolutionized how we learn about American history. After a six-week program, students created their own pieces that they performed on the Hamilton stage (the room where it happens).

Google Expeditions helped bring students closer to Alexander Hamilton's history.

12. Were awestruck by the innovators in Latin America who joined the #InnovarParaMi movement. From a teacher helping indigenous women in Mexico get online to a fifth grader turning water bottles into light bulbs, teachers and students across Latin America are using technology to empower a rising generation of innovative changemakers.

These sixth graders built a dispenser to make drinking water accessible. #innovarparami

13. Showed girls that the sky's the limit for women in tech.

14. Saw the future through the eyes of hundreds of thousands of young artists who participated in Doodle 4 Google, a contest for students to design their own Google Doodle. Guest judges selected the 2017 winner based on artistic merit, creativity, and their written statement explaining their vision for the future. (The 2018 contest just opened, so submit your Doodle!)

Connecticut 10th grader Sarah Harrison's Doodle, "A Peaceful Future" (center), was chosen as the national winner.

15. Connected live with thousands of educators and students at events around the world like Bett in London, ISTE in Texas, EduTECH in Australia, EDUCAUSE in Pennsylvania and more. We hosted an online conference, EduOnAir, in Australia, celebrated Dia dos Professores in Brazil, hosted a study tour in Sweden, kicked off a new school year in Mexico, and road-tripped across the US with ExploreEDU.

#innovarparacampeche

16. Traveled to a new dimension with the launch of the Google Expeditions AR Pioneer Program. With augmented reality, students can explore the solar system up close, and even tour the Roman Colosseum from their classroom. (You can still sign up to bring AR to your class!)

Expeditions AR - Bringing the world into the classroom

17. Threw our first-ever PD party to celebrate passionate lifelong learners. Throughout the week of festivities, we offered discounts on our professional development programs and hosted webinars from Certified Educators, Trainers and Innovators. Looking for a 2018 resolution? Explore our Training Center for a professional development opportunity that's right for you.

We are constantly inspired by the powerful work of educators around the world and we are excited to continue working together this coming year and beyond. Thank you for all that you do, both inside and outside the classroom, to help prepare future generations to make the world a better (and brainier) place!



Memory machines: VR180 cameras, and capturing life as you see it
Tue, 09 Jan 2018 19:30:00 +0000

When I was growing up, my dad and even my grandfather always had camcorders stuck to their shoulders. They were our family documentarians, and were always the first to try a new gadget or gizmo if it would help us remember the places we went and the special times we shared. Decades later, I'm so grateful, and I treasure the memories they captured on Betamax and film.

My grandfather Henry in the backyard with his video camera.

We care about photos and videos because they connect us with important moments, special trips, and time together with the people who matter most to us. They're abstract representations that help us remember: little visual gifts to our future selves. That being said, for most of the 20th century, photos and videos were the best you could do. They're better than nothing, but so far from the real thing.

A photo of me at Disneyland at age 4, taken by my dad with a Nikon EM 35mm SLR.

But as the technology used to capture these moments has improved, the fidelity has also increased. From primitive pinhole cameras, to black-and-white film cameras, to color, to video, there's been a continuous upward trajectory of resolution and quality. Today's high-end VR cameras are a big leap forward. Through immersive, stereoscopic footage, they do something more compelling than refreshing your memory: they make you feel like you're there. And the closer cameras get to capturing the moment just the way we experienced it, the closer we get to creating time machines for ourselves.

Though Google started by making VR cameras for filmmakers and professional creators a few years ago, our team has always aimed to help people capture their personal memories in VR. But in order to make this tech accessible to everyone, we had to rethink the camera itself. There are 360 cameras on the market today, but they present some challenges: they can be costly, confusing to use (where do you point it?), and the photographer always ends up in the frame. So, we focused on the pixels that matter (the ones in front of you!) with a new format we're calling VR180. And we started designing high-quality, pocket-sized cameras that anyone could use to capture VR180 experiences with just the click of a button. The first VR180 cameras will hit shelves throughout this year, just in time for you to start hitting "record" on your own memories in 2018.

I've been using the VR180 prototypes for a while now, in places like my living room or on trips to the beach. It's easy to share the captures with my family and friends. They can look at them on their phones, or use a viewer like Cardboard or Daydream View to step into the moment as if they were there. It's amazing that I can film my sons jumping on the trampoline, or having a quiet breakfast, or being back where I was many years ago, on a ride at a carnival, and not only share those moments with family far away, but also relive them myself, in a way that makes me feel like I'm right back in each moment.

VR180 capture of one of my sons on a carnival ride, captured with one of our camera prototypes.

That's why these VR180 cameras are so special. They do your memories justice, by enabling you to capture life the way you see it: with two eyes. When I've shown my family these recordings, they look into the headset, and smile. They say things like, "This is amazing!" and, when they take the headset off: "I only wish we had these cameras sooner."

I couldn't agree more.



A new way to experience Daydream and capture memories in VR
Tue, 09 Jan 2018 19:15:00 +0000

Since we launched Cardboard, our goal has been to create virtual reality experiences that are accessible, useful, and relevant to as many people as possible. With Daydream, we've been building a platform for high-quality mobile VR: we've worked with lots of different partners to bring fifteen Daydream-ready phones to market for smartphone VR. And today marks another step, with Lenovo unveiling new details about the Mirage Solo, a Daydream standalone headset we first announced at Google I/O. With it, you'll have a more immersive and streamlined way to experience the best of what Daydream has to offer without needing a smartphone.

We've also been investing in ways to help you capture your life's most important moments in VR. We've designed high-quality, yet simple and pocket-sized cameras that anyone can use with just the click of a button. Our partners Lenovo and YI are sharing more on these, and they'll be available beginning in the second quarter this year.

Experience Daydream in a new way

The Lenovo Mirage Solo builds on everything that's great about smartphone-based VR (portability and ease of use) and delivers an even more immersive virtual reality experience. You don't need a smartphone to use it: you just pick it up, put it on, and you're ready to go. The headset is more comfortable and natural because of a new technology we created at Google called WorldSense. Based on years of investment in simultaneous localization and mapping (SLAM), it enables PC-quality positional tracking on a mobile device without the need for any additional external sensors. WorldSense lets you duck, dodge and lean, and step backwards, forwards or side to side, unlocking new gameplay elements that bring the virtual world to life. WorldSense tracking and the Mirage Solo's high-performance graphics mean that the objects you see will stay fixed in place just like in the real world, no matter which way you tilt or move your head. The Lenovo Mirage Solo will also have a wide field of view for great immersion, and an advanced display optimized for virtual reality, so everything you see stays crystal clear. It's the best way to access Daydream.

Lenovo Mirage Solo

We're working closely with developers to bring new experiences to the platform that take advantage of all these new technologies, including a new game based on the iconic universe of Blade Runner called Blade Runner: Revelations. You'll also have access to the entire Daydream catalog of over 250 apps, including Google apps like Street View, Photos, and Expeditions. With YouTube VR, you can watch the best VR video content, from powerful short pieces chronicling extraordinary role models to music, fashion, sports and epic journeys around the world. The Lenovo Mirage Solo also has built-in casting support, so you're just a couple of clicks away from sharing your virtual experiences to a television for your friends and family to follow along. It will hit shelves beginning in the second quarter this year.

Capture your most important memories with VR180 cameras

Photos and videos matter to us because they help us remember the special moments in our lives. But what if you could do more than just remember a moment; what if you could relive it? That's the idea behind the VR180 format, and we created VR180 cameras so that anyone could have an easy way to capture and then re-experience the past.


For the full effect, check out this video in a VR headset like Cardboard or Daydream View.

VR180 cameras are simple and designed for anyone to use, even if they've never tried VR before. There are other consumer VR cameras available today, but you have to think carefully about where you place these cameras when recording, and they capture flat 360 footage that doesn't create a realistic sense of depth. In contrast, with VR180 cameras, you just point and shoot to take 3D photos and videos of the world in stunning 4K resolution. The resulting imagery is far more immersive than what you get with a traditional camera. You just feel like you're there. You can re-experience the memories you capture in virtual reality with a headset like Cardboard or Daydream View. Or for a lightweight but more accessible experience, you can watch on your phone.

With options for unlimited private storage in Google Photos, you'll have complete control over these irreplaceable memories, and you can also view them anytime in 2D on your mobile or desktop devices without a VR headset. If you want to share them, uploading to services like YouTube is easy.

Lenovo Mirage Camera

Several VR180 cameras will be available soon. Different models will sport different features, like live streaming, which lets you share special moments in real time. The Lenovo Mirage Camera and YI Technology's YI Horizon VR180 Camera will hit shelves beginning in the second quarter, and a camera from LG will be coming later this year. For professional creators, the Z Cam K1 Pro recently launched, and Panasonic is building VR180 support for their just-announced GH5 cameras with a new add-on.

YI Horizon VR180 Camera

We're continuing to invest in virtual reality experiences that are compelling and relevant for everyone. Whether you access Daydream through a Daydream View and the Daydream-ready smartphone of your choice, or the new, more immersive Lenovo Mirage Solo, you'll get the best mobile VR apps and videos anywhere. And with a range of VR180 cameras to choose from, you'll be able to capture your most important memories in a new way.

We also want to hear from you. Starting today, we're launching a VR180 contest: tell us about a special memory you'd like to capture, and we'll work with the winners to bring their ideas to life.


