John,
For the areaDetector package, Tom Cobb at Diamond has written an MJPEG
plugin that compresses the video to reduce the network bandwidth. It
will work with any areaDetector camera.
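As a rough sketch of how such a stream is consumed on the client side: an MJPEG HTTP stream is a multipart body whose parts are ordinary JPEG images, so frames can be split out by scanning for the JPEG start-of-image (0xFFD8) and end-of-image (0xFFD9) markers. The buffer below is made up for illustration; a real client would read the bytes from the plugin's HTTP port.

```python
def extract_jpeg_frames(buf: bytes) -> list[bytes]:
    """Return the complete JPEG frames contained in an MJPEG byte buffer.

    Naive sketch: assumes the EOI marker does not occur inside the
    compressed data, which is good enough for a demonstration.
    """
    frames = []
    start = 0
    while True:
        soi = buf.find(b"\xff\xd8", start)    # start-of-image marker
        if soi == -1:
            break
        eoi = buf.find(b"\xff\xd9", soi + 2)  # end-of-image marker
        if eoi == -1:
            break                             # incomplete trailing frame
        frames.append(buf[soi:eoi + 2])
        start = eoi + 2
    return frames

# Two fake "frames" separated by multipart boundary text:
stream = (b"--myboundary\r\nContent-Type: image/jpeg\r\n\r\n"
          b"\xff\xd8FRAME1\xff\xd9"
          b"\r\n--myboundary\r\nContent-Type: image/jpeg\r\n\r\n"
          b"\xff\xd8FRAME2\xff\xd9")
print(len(extract_jpeg_frames(stream)))  # 2
```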
Mark
-----Original Message-----
From: [email protected]
[mailto:[email protected]] On Behalf Of
[email protected]
Sent: Monday, August 30, 2010 2:46 PM
To: John Dobbins
Cc: EPICS Techtalk
Subject: Re: network video
John,
I use MPEG-4 streaming with Axis servers ('blade' video server a243q).
I use Linux IOC/OPI with axmjpeg-based IOCs (for video statistics)
and GStreamer/Phonon/MPlayer for video display.
As for the network, I use a 4 Gbps LAG (4 x 1 Gbps Cat6 physical links
aggregated into one logical 4 Gbps link) as the control network backbone.
For video streams, I use multicast protocols (one stream of data feeds
video processing and display on several stations).
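A minimal sketch of what a subscribing display station does: bind a UDP socket and join the multicast group, so the switch delivers one copy of the camera stream to every subscriber. The group address and port below are made-up examples, not our actual configuration.

```python
import socket
import struct

MCAST_GRP = "239.1.1.1"   # hypothetical multicast group for one camera feed
MCAST_PORT = 5004         # hypothetical port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)

# ip_mreq: 4-byte group address + 4-byte interface address (0.0.0.0 = any)
mreq = struct.pack("4s4s",
                   socket.inet_aton(MCAST_GRP),
                   socket.inet_aton("0.0.0.0"))
try:
    sock.bind(("", MCAST_PORT))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    sock.settimeout(0.5)
    data, addr = sock.recvfrom(65535)  # one datagram of the video stream
    print(f"received {len(data)} bytes from {addr}")
except OSError:
    # No camera on this group, or no multicast-capable interface here.
    print("no multicast traffic received")
finally:
    sock.close()
```

Any number of stations can join the same group without increasing the load on the video server, which is the point of multicast.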
Now, I remember having a friendly but tense discussion about video feeds
on this mailing list.
Some of us prefer running 60 Hz, 2K x 2K pixel, uncompressed video over a
FireWire link.
For me, 25 Hz, MPEG-4, black-and-white, 640x480 over the network works
like a charm.
I configure the bandwidth usage based on the required video size, refresh
rate, pixel depth, and bit rate, as needed.
With MPEG-4, the required bandwidth is among the lowest of all the video
standards.
A common video server can stream at 1 Gbps (the limit of the network
card), so if you put 12 cameras on one network interface (i.e. one
server), despite the variable rate you will still get at most 1 Gbps of
output.
When you saturate the network card, the video starts pixelating.
You need to size the number of cameras per network interface based on the
'expected' changes in the video image.
(Our cameras always point in the same direction and the background stays
the same, so the delta between temporal frames is small; intra-frame
compression is also high, as 50% of the image is black.)
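A back-of-envelope check of that sizing, with illustrative numbers (the per-camera bit rate is an assumption for a low-motion scene like ours, not a measurement):

```python
# Bandwidth budget for cameras sharing one 1 Gbps network interface.
nic_capacity_mbps = 1000        # 1 Gbps NIC
headroom = 0.8                  # keep ~20% free to stay below saturation
per_camera_peak_mbps = 4        # assumed: 640x480 @ 25 Hz MPEG-4, busy scene

usable_mbps = nic_capacity_mbps * headroom
max_cameras = int(usable_mbps // per_camera_peak_mbps)
print(max_cameras)  # 200

# Conversely, 12 cameras on one NIC leaves each a generous budget:
per_camera_budget_mbps = usable_mbps / 12
print(round(per_camera_budget_mbps, 1))  # 66.7
```

The interesting case is the peak: if the scene suddenly changes everywhere at once, every camera's bit rate spikes together, which is exactly when the NIC saturates and pixelation starts.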
BEWARE:
Transmission latency can be an issue.
Compression + routing/transmission + decompression is ~1 s here,
but it can easily reach the 3 s range if you are not careful.
My operators expect 'real-time' video, i.e. to see exactly what is
happening RIGHT NOW.
A lag > 1 s is an issue here.
-> End-station processing needs to be optimized (profiling, pseudo-coloring, etc.)
-> The number of switches between display stations and video sources
needs to be minimized
(three $200 switches in our case)
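The latency budget is just the sum of the stages above; a sketch with illustrative per-stage numbers (assumptions, not measurements) shows how quickly the total approaches the 1 s threshold operators notice:

```python
# Rough end-to-end latency budget for a network video feed.
# Per-stage values are illustrative assumptions, not measurements.
stages_ms = {
    "encode (camera/server)": 400,
    "switch hops (3 switches)": 3 * 1,  # per-hop store-and-forward is small
    "network transit": 10,
    "decode + display": 500,
}

total_ms = sum(stages_ms.values())
print(total_ms / 1000, "s")  # 0.913 s
```

The codec work at the two ends dominates; extra switch hops matter less per hop, but slow client-side processing (profiling, pseudo-coloring) can easily push the total past 1 s.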
Overall:
Our philosophy here is 'everything over Ethernet'.
Our move from coaxial to Ethernet-based video was very successful.
Good luck,
--
Emmanuel
On 10:54 Fri 27 Aug , John Dobbins wrote:
> All,
>
> Of those of you transmitting digital video signals as part of your
> controls network, I am wondering whether you are transmitting video on
> your EPICS subnet or on a separate network. What, if any, special
> arrangements have you made to deal with the potentially large
> throughput from video sources? (I hope the question doesn't seem overly
> vague; we have varying ideas here about how video should be handled,
> and I am curious about other labs' experience - as compared to our
> inexperience.)
>
> Regards and thanks,
>
> John Dobbins
> Lab for Elementary Particle Physics
> Cornell University