Table of contents
1. Why this article?
2. The program and source code
3. A general MFC SDI application for high-performance and versatile video playback
4. A sample showing hypervideo
5. Download section
This article provides executables and VC++ projects demonstrating high-performance
video playback using the Microsoft DirectShow API. While the DirectShow SDK
includes a number of samples, most of them are written in pure, low-level Win32.
One of the few examples written with MFC is a playback routine that simply
opens a video file and then renders it through an automatically constructed filter
graph. This filter graph builds the window by itself and shows the video, allowing
little interaction with the user. The aim of what I show here is to:
1. bring a sample using DirectShow written in MFC with the SDI class architecture;
2. bring a couple of versatile routines so that you can customize almost everything,
including user interaction, video display, windowing and so on. All of this
gives you full control over the video, which matters if you consider applications
featuring anything beyond simple (non-interactive) playback;
3. bring a couple of ideas for the construction of a generic video class that
would cover every aspect a coder needs in a multimedia application that
uses video in different ways. For example, the application could switch between
fullscreen mode and windowed mode, synchronize two or more videos, let the
user click on the video to launch a hypermedia link, dynamically resize or
move the video window, or play video segments instead of whole videos, and so on.
To run the executable programs, you need at least the DirectShow run-time,
whatever its version. You can download the run-time (4 MB) at the following
address. To build the source code, you need the DirectShow SDK, whatever its version.
You can download the SDK (90 MB) at the following address: http://www.microsoft.com/directX/download.asp.
DirectShow is an SDK from Microsoft Corporation.
So, what’s the hot stuff? There are four things to download. I show and explain
two samples:
The second sample is an MFC SDI application which brings a new form of interactivity
to video: while simply performing video overlay using the GDI, it lets you
move and stretch a hotspot live on the video. You can download the executable
program and the source code.
The idea behind this second sample is to track the actors and characters
in the video manually, with the mouse, while it is playing. This sequence of
hotspots through time makes up an actor sequence. This sequence, or set of
sequences if required, can be saved to disk for future use. A player could read
this data file in the context of authoring software (or in a simple hand-written
application), test every mouse click in the video, and launch something, a web
page for example, when the click is performed inside the hotspot. This
association between an actor and the launched links is what we call a hypervideo
link. The concept itself is the hypervideo clip, which is video plus interactive
link content. It has a lot at stake in the field of online video streaming,
and in offline course-based training, for example. This sample is thus the root
of a hypervideo generator. Contact me
for more details.
This sample is based on an existing DirectShow sample taken from the SDK, which
is not in the samples directory but actually in the documentation itself.
So let me remind you what to do:
first, copy/paste the source code from the DirectShow documentation.
Starting at the default page, go to the Application Developer’s Guide,
then How To…, then Play a Movie in a Window Using DirectDrawEx and
Multimedia Streaming. This program is an extension of the simple ShowStream
sample provided in the SDK.
The sample is a standard SDI application. The WinApp instance starts up and
creates a standard frame window + document + view. In our case, the frame window
and the document are almost useless. The WinApp calls a
routine on idle, provoking the display of the current video sample in the client
area of the view.
The video engine lives inside the view class. It performs the elementary steps:
show a file dialog to select the video file; initialize the DirectDraw surfaces,
thereby preparing the display and offscreen buffers; initialize the multimedia
stream by adding a video stream and an audio stream to the filter graph; and
call the magical render routine, which automatically builds a compatible filter
graph that parses, demultiplexes and then renders both video and sound to the
appropriate devices on your machine.
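Those elementary steps can be sketched as a call sequence. This is a minimal sketch, not the sample's actual DirectShow code: every helper below is a hypothetical stub that merely records its step, standing in for the real COM calls, so that only the order of operations is shown.

```cpp
#include <string>
#include <vector>

// Hypothetical stand-ins for the calls made by the view class; each stub
// just records that its step ran.  Real code would drive the COM interfaces
// (IDirectDraw, IAMMultiMediaStream, ...) instead.
static std::vector<std::string> g_steps;

static bool ShowFileDialog(std::string& path) {
    path = "movie.avi";                       // user picks the video file
    g_steps.push_back("file dialog");
    return true;
}

static bool InitDirectDrawSurfaces() {
    g_steps.push_back("directdraw surfaces"); // primary + offscreen buffers
    return true;
}

static bool InitMultimediaStream(const std::string& path) {
    g_steps.push_back("add video stream");    // video stream into the graph
    g_steps.push_back("add audio stream");    // audio stream into the graph
    g_steps.push_back("render " + path);      // render builds the filter graph
    return true;
}

// The elementary steps performed by the view's video engine, in order.
bool OpenMovie() {
    std::string path;
    return ShowFileDialog(path)
        && InitDirectDrawSurfaces()
        && InitMultimediaStream(path);
}
```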
The MovieView class pumps messages while the video is playing, allowing you
to pause the video by pressing the space key and to restart it, to stretch
the main window, or even to switch to "fullscreen"
by double-clicking on the title bar.
The classes MovieApp, Mainframe, MovieDoc and MovieView are a good reusable
set of classes.
The RenderToSurface() routine is a specific routine. In order
to allow future interaction, the backbuffer which receives the video sample is
not necessarily blitted straight to the primary visible surface in the view.
When bSimpleBlit is set to FALSE, the sample backbuffer
is first blitted to a copy backbuffer, and the copy backbuffer is then stretch-blitted
to the visible surface. This lets you apply filters, effects, GDI overlay,
and so on to the copy backbuffer before it is blitted. Why not directly on the
sample backbuffer, sir? Because if the video is put into PAUSE mode, the sample backbuffer
is not refreshed, and any image processing applied to it leaves dull traces.
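The rationale can be illustrated with a tiny simulation. The buffers here are plain byte arrays standing in for DirectDraw surfaces, and ComposeFrame is a hypothetical helper, not the sample's actual code: drawing the overlay on a copy keeps the sample backbuffer pristine, so a paused (non-refreshing) frame can be re-composited without accumulating traces.

```cpp
#include <vector>

using Surface = std::vector<unsigned char>;  // stand-in for a DirectDraw surface

// Simulate the copy-backbuffer scheme: the overlay is drawn on a copy, never
// on the sample backbuffer itself, so a paused frame stays clean.
Surface ComposeFrame(const Surface& sampleBack, std::size_t hotspotPos) {
    Surface copyBack = sampleBack;           // blit sample -> copy backbuffer
    copyBack.at(hotspotPos) = 0xFF;          // GDI-style overlay on the copy
    return copyBack;                         // copy is then blitted to screen
}
```

Had the overlay been drawn on the sample backbuffer directly, a second composition of the same paused frame would still carry the first overlay, which is exactly the "dull traces" effect described above.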
I guess it’s time to test the sample.
The second sample is an extension of the first sample. The RenderToSurface()
routine is a good place to perform overlay, hence I draw a square hotspot using
the GDI and stretch-blit everything towards the visible surface.
Now let me tell you that I also track mouse buttons and moves. If the user
clicks the left button inside the hotspot and then drags the mouse,
the hotspot moves. If the user clicks the right button and
then drags the mouse, the hotspot stretches.
The user can play/pause the video with the space key, and
toggle hatching inside the hotspot by pressing the S key.
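A minimal sketch of that interaction, with hypothetical names rather than the sample's actual code: left-drag translates the hotspot rectangle, right-drag resizes it by moving its bottom-right corner.

```cpp
// Square hotspot in video coordinates (hypothetical helper, illustration only).
struct Hotspot {
    int left, top, right, bottom;

    bool Contains(int x, int y) const {      // hit test for the left click
        return x >= left && x < right && y >= top && y < bottom;
    }
    void MoveBy(int dx, int dy) {            // left-button drag: translate
        left += dx; right += dx;
        top  += dy; bottom += dy;
    }
    void StretchBy(int dx, int dy) {         // right-button drag: resize
        right  += dx;
        bottom += dy;
        if (right  <= left) right  = left + 1;  // keep a valid rectangle
        if (bottom <= top)  bottom = top  + 1;
    }
};
```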
When I capture the mouse, the coordinates are given in screen coordinates.
If I use the GDI to get a DC from a DirectDraw surface, I must remember that
such a DC is basically in raw MM_TEXT mode. Thus I have to manage a
system of physical/logical coordinates in order to draw the hotspot where
it should be seen! This explains the handful of coordinate calculations.
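A hedged sketch of that mapping, with hypothetical names: screen coordinates are first made client-relative, then scaled from the stretched view size back to the native video size, since the MM_TEXT DC of the surface counts raw pixels.

```cpp
struct Point { int x, y; };

// Map a mouse position in screen coordinates to pixel coordinates on the
// video surface.  clientOrigin is the view's top-left corner in screen
// coordinates; the view is viewW x viewH on screen while the surface is
// videoW x videoH pixels (MM_TEXT: one logical unit == one pixel).
Point ScreenToVideo(Point screen, Point clientOrigin,
                    int viewW, int viewH, int videoW, int videoH) {
    Point client{screen.x - clientOrigin.x, screen.y - clientOrigin.y};
    return Point{client.x * videoW / viewW,    // undo the horizontal stretch
                 client.y * videoH / viewH};   // undo the vertical stretch
}
```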
This MovieView class is left open. In this release I have implemented only
a couple of helper functions. There are so many left to implement that it will
take quite a few more articles, discussions with you and, of course, fun!
Date Last Updated: February 3, 1999