Not sure where to go on a trip? This application is your guide to spending your free time. In the individual categories you will find up-to-date tips for trips, events for children, fairs, exhibitions...
The last project in our v2 week comes from Vangos Pterneas, who shows off the new background removal and replacement capabilities. Other recent Gallery posts featuring Vangos Pterneas:
Over the past few days, I have received many requests about Kinect color-to-depth pixel mapping. As you probably already know, the Kinect streams are not properly aligned: the RGB and depth cameras have different resolutions, and their points of view are slightly shifted. As a result, more and more people have been asking me (either in the blog comments or by email) how to properly align the color and depth streams. The most common application they want to build is a cool green-screen effect, just like the following video:

As you can see, the girl is tracked by the Kinect sensor and the background is completely removed. I can replace the background with a solid color, a gradient fill, or even a random image! Nice, huh? So I created a simple project that maps a player's depth values to the corresponding color pixels. This way, I can remove the background and replace it with something else. The source code is hosted on GitHub as a separate project. It is also part of Vitruvius.
How background removal works

When we refer to "background removal", we mean keeping the pixels that form the user and discarding everything else that does not belong to the user. The depth camera of the Kinect sensor comes in handy for determining the user's body. However, we need the RGB color values, not the depth distances, so we must specify which RGB values correspond to the user's depth values. Confused? Don't be. Using Kinect, each point in space has the following information:
- Color value: Red + Green + Blue
- Depth value: the distance from the sensor

The depth camera gives us the depth value, and the RGB camera provides the color value. We map between those values using CoordinateMapper, a useful Kinect property that determines which color value corresponds to each depth value.
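The mapping idea can be sketched without the SDK: for each depth pixel that belongs to the player, look up the corresponding color pixel and copy it; every other output pixel keeps the replacement background. The following Python sketch is a simplified, hypothetical illustration of that loop (a real app would use the Kinect SDK's CoordinateMapper instead of the `depth_to_color` lookup passed in here):

```python
def remove_background(depth_player_mask, color_pixels, depth_to_color, background):
    """For each depth pixel flagged as 'player', copy the mapped color pixel;
    everywhere else keep the replacement background. Inputs are flat lists."""
    out = list(background)
    for depth_index, is_player in enumerate(depth_player_mask):
        if is_player:
            # CoordinateMapper would provide this depth->color index lookup.
            color_index = depth_to_color(depth_index)
            out[color_index] = color_pixels[color_index]
    return out

# Toy 1x4 frame: pixels 1 and 2 belong to the player.
mask = [False, True, True, False]
color = ["c0", "c1", "c2", "c3"]
# Identity mapping stands in for the real depth->color correspondence.
result = remove_background(mask, color, lambda i: i, ["bg"] * 4)
print(result)  # ['bg', 'c1', 'c2', 'bg']
```

The background list can be a solid color, a gradient, or pixels from any replacement image, which is exactly the green-screen effect described above.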
An updated version of Service Pack 1 for SharePoint 2013 was released for download yesterday. The first version of the Service Pack was already available some time ago; unfortunately, it contained several patches that, after installation...
Today the Kinect for Windows team highlights two companies that are building their business on the Kinect for Windows v2.
BUILD—Microsoft's annual developer conference—is the perfect showcase for inventive, innovative solutions created with the latest Microsoft technologies. As we mentioned in our previous blog, some of the technologists who have been part of the Kinect for Windows v2 developer preview program are here at BUILD, demonstrating their amazing apps. In this blog, we'll take a closer look at how Kinect for Windows v2 has spawned creative leaps forward at two innovative companies: Freak'n Genius and Reflexion Health.

Freak'n Genius is a Seattle-based company whose current YAKiT and YAKiT Kids applications, which let users create talking photos on a smartphone, have been used to generate well over a million videos. But with Kinect for Windows v2, Freak'n Genius is poised to flip animation on its head, taking what has been highly technical, time-consuming, and expensive and making it instant, free, and fun. It's performance-based animation without the suits, tracking balls, and room-size setups. Freak'n Genius has developed software that will enable just about anyone to create cartoons with fully animated characters by using a Kinect for Windows v2 sensor. The user simply chooses an on-screen character—the beta features 20 characters, with dozens more in the works—and animates it by standing in front of the Kinect for Windows sensor and moving. With its precise skeletal tracking capabilities, the v2 sensor captures the "animator's" every twitch, jump, and gesture, translating them into movements of the on-screen character. What's more, with the ability to create Windows Store apps, Kinect for Windows v2 stands to bring Freak'n Genius's improved animation applications to countless new customers.

Reflexion Health, based in San Diego, uses Kinect for Windows to augment its physical therapy program and give therapists a powerful, data-driven new tool to help ensure that patients get the maximum benefit from their PT.
Their application, named Vera, uses Kinect for Windows to track patients’ exercise sessions. The initial version of this app was built on the original Kinect for Windows, but the team eagerly—and easily—adapted the software to the v2 sensor and SDK. The new sensor’s improved depth sensing and enhanced skeletal tracking, which delivers information on more joints, allows the software to capture the patient’s exercise moves in far more precise detail. It provides patients with a model for how to do the exercise correctly, and simultaneously compares the patient’s movements to the prescribed exercise. The Vera system thus offers immediate, real-time feedback—no more wondering if you’re lifting or twisting in the right way. The data on the patient’s movements are also shared with the therapist, so that he or she can track the patient’s progress and adjust the exercise regimen remotely for maximum therapeutic benefit.
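Vera's actual comparison logic is not documented here, but the general idea—measure how far a tracked joint angle deviates from the prescribed motion and give immediate feedback when it exceeds a tolerance—can be sketched. Everything below (function names, the tolerance value, 2D joint positions) is a hypothetical illustration, not Reflexion Health's implementation:

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees), formed by 2D points a-b-c."""
    ang = math.degrees(math.atan2(c[1] - b[1], c[0] - b[0]) -
                       math.atan2(a[1] - b[1], a[0] - b[0]))
    ang = abs(ang)
    return ang if ang <= 180 else 360 - ang

def within_tolerance(measured_deg, prescribed_deg, tolerance_deg=10.0):
    """Real-time feedback: is the patient's joint angle close enough?"""
    return abs(measured_deg - prescribed_deg) <= tolerance_deg

# Elbow at origin, shoulder to the left, wrist straight up: a 90-degree bend.
elbow = joint_angle((-1, 0), (0, 0), (0, 1))
print(round(elbow))                   # 90
print(within_tolerance(elbow, 90.0))  # True
print(within_tolerance(elbow, 45.0))  # False
```

With the v2 sensor reporting more joints per skeleton, a check like this can run per joint, per frame, which is what makes the "no more wondering if you're lifting or twisting in the right way" feedback possible.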
Windows Wednesdays are likely going to be pretty focused on "universal apps" for a while. I know, funny given that it just came out, but it's pretty exciting, and it allows us to build apps that run on devices from 2 feet to 10 feet (once Xbox One support is released). Today, we highlight a project from Jeff Prosise, where he shows us how he took an existing Windows 8 app and converted it into a universal app, customizing the experience for each device. Previous times we've highlighted Jeff:
A couple of weeks ago, I posted about the new universal app model that Microsoft introduced at BUILD 2014. In that post, I introduced a version of Contoso Cookbook that runs on Windows 8.1 and Windows Phone 8.1. That sample covered the basics of universal apps, including using a shared project to share code and resources between Windows and Windows Phone projects. Today I'd like to take it one step further by introducing a more sophisticated universal app named MyComix Reader, or simply "MyComix." It's an updated version of a sample that I published in 2012. MyComix now gets its data from Azure, and it runs on Windows and Windows Phone, enabling it to support a variety of form factors, including PCs, tablets, and, of course, phones. The screen shot below shows the app's start page on Windows and Windows Phone. Same data; just a different way of rendering it on different devices. The app supports a three-page navigation model, and it supports search. To see for yourself, download the Visual Studio solution and run it on your development PC. Remember that you'll need Visual Studio 2013 Update 2 installed to do so.
Sharing Resources through PCLs

Recall that a freshly created universal-app solution contains three projects: a Windows project, a Windows Phone project, and a shared project that doesn't target any particular platform but contains files shared with the other two projects via linking. MyComix contains a fourth project: a Portable Class Library (PCL) named MyComixReader.Controls that I added in Visual Studio. That project includes a pair of custom controls that I use in the Windows app and the Windows Phone app: one named MagicImage, and another named CoverFlow. (To see the MagicImage control at work, tap one of the comic books to see a detail, and then tap the comic-book cover image.) I originally developed the CoverFlow control for Windows after modifying the source code for a Silverlight CoverFlow control posted on CodePlex long ago. I was pleasantly surprised to find that the control
With the end of support for Windows XP and the release of Windows 8.1 Update, the major update for Windows Server 2012 R2, released at the same time, stayed somewhat out of the media spotlight. So let's take a look...
Watch the video of Štěpán Bechyňský's talk from April 22 on the topic of Internet of Things and Windows Azure:
Released April 23, 2014
Another way your web application can now match desktop applications is system notifications. In this article we will show you how to create them and deliver them to the user.
This week is going to be a v2 theme week, with Friend of the Gallery Marcus Kohnert kicking it off, showing off his Rx skills with the Kinect. Other times he's been featured:
A few weeks ago I was finally able to get my hands on the new Kinect for Windows v2 SDK. There are a few API changes compared to v1, so I started to port Kinect.Reactive to the new Kinect for Windows Dev Preview SDK, and Kinect.ReactiveV2 was born.

Kinect.ReactiveV2 is, like its older brother, a project that contains a bunch of extension methods that ease development with the Kinect for Windows SDK. The project uses the Reactive Extensions (an open-source framework built by Microsoft) to transform the various Kinect reader events into IObservable<T> sequences. This transformation enables you to use Linq-style query operators on those events. Here is an example of how to use the BodyIndexFrame data as an observable sequence. You'll also get an extension method called SceneChanges() on every KinectSensor instance, which notifies all its subscribers whenever a person enters or leaves a scene.

...

Please be aware that "This is preliminary software and/or hardware and APIs are preliminary and subject to change".

[Click through for the entire post]

Project Information URL: http://passiondev.wordpress.com/2014/04/07/kinect-reactivev2-rx-ing-the-kinect-for-windows-sdk/

Project Download URL: https://www.nuget.org/packages/Kinect.ReactiveV2/

Project Source
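The core idea behind Kinect.ReactiveV2—wrapping callback-style frame events in an observable sequence so they can be queried with Linq-style operators—can be sketched language-neutrally. The following Python sketch hand-rolls a minimal observable; `FakeSensor` and its event are hypothetical stand-ins for a Kinect sensor, not part of any SDK:

```python
class Observable:
    """Minimal push-based observable: subscribers receive every emitted item."""
    def __init__(self):
        self._subscribers = []

    def subscribe(self, on_next):
        self._subscribers.append(on_next)

    def emit(self, item):
        for on_next in self._subscribers:
            on_next(item)

    def where(self, predicate):
        """Linq-style filter: a new observable of only the matching items."""
        out = Observable()
        self.subscribe(lambda item: predicate(item) and out.emit(item))
        return out

    def select(self, projection):
        """Linq-style map: a new observable of projected items."""
        out = Observable()
        self.subscribe(lambda item: out.emit(projection(item)))
        return out


class FakeSensor:
    """Hypothetical stand-in for a sensor exposing a frame-arrived event."""
    def __init__(self):
        self.body_frames = Observable()

    def push_frame(self, tracked_body_count):
        self.body_frames.emit({"bodies": tracked_body_count})


sensor = FakeSensor()
seen = []
# Query the event stream like a sequence: keep only frames with a tracked body.
sensor.body_frames.where(lambda f: f["bodies"] > 0) \
                  .select(lambda f: f["bodies"]) \
                  .subscribe(seen.append)

for count in [0, 1, 2, 0, 3]:
    sensor.push_frame(count)

print(seen)  # [1, 2, 3]
```

In the real library, Rx's `Observable.FromEventPattern` does the event-to-sequence wrapping, and the full set of Rx operators replaces these two toy methods.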