[Dev] Display servers (was Cynara)
casey.schaufler at intel.com
Fri Apr 11 16:57:29 GMT 2014
> -----Original Message-----
> From: Dev [mailto:dev-bounces at lists.tizen.org] On Behalf Of Jussi Laako
> Sent: Friday, April 11, 2014 6:01 AM
> To: dev at lists.tizen.org
> Subject: Re: [Dev] Display servers (was Cynara)
> Speaking of display servers I find it hilarious that keyboard, touch, mouse
> and video output somehow belong together, but audio is always outside the
This is an artifact of the relatively late addition of sound to
the user experience. Video, keyboards and mice were
well understood and integrated long (in computer terms)
before audio became normal. If real sound cards had been included
in the original (128k) Macintosh, and moving the mouse had
been accompanied by a swooshing noise, we'd have audio
> Does Siri voice input in iOS go through display server? I don't think so.
> Why would audio be somehow special compared to touch, mouse, keyboard
> or video? How about haptic feedback or accelerometers?
Because voice processing has never caught on. Imagine a cube farm
where everyone is talking to their computers. One Loud Howard in
the room and everyone's programs look like his.
> In Tizen, pulseaudio is audio equivalent of the display server. Why doesn't
> pulseaudio hook into all keyboard, mouse and touch events?
It doesn't need to.
> Better to keep all those separate and not create an "all encompassing" mega
> not-really-display-server that would be a security and privacy disaster.
Right. But we all know that if security makes the mouse
jerky or the frame rate fall below 60 FPS it's outta there.
> Dev mailing list
> Dev at lists.tizen.org