Dead White Guys
Sundays 6-9am on 88.3 WCBN FM Ann Arbor
Monday, February 25, 2013
Thank you!
Thank you to everyone who pledged during our show and throughout the 2013 WCBN Fundraiser. The station raised $35,216.65! Your support allows us to continue serving you with interesting and unique programming found nowhere else.
Tuesday, February 5, 2013
WCBN Fundraiser: February 8–17
Join us for our 2013 On Air Fundraiser!
Dead White Guys will produce two fundraiser shows: February 10th and February 17th. Enjoy the music, and please consider making a monetary donation to maintain our program's offbeat approach to classical music.
Saturday, August 18, 2012
Joseph Martin Kraus, the Swedish Mozart
It's an unfair characterization, of course, but when Joseph Martin Kraus's name is mentioned, the comparison to Mozart usually follows. Superficially, there is some justification; the two were born five months apart in 1756, and died a year apart. Stylistically, though, Kraus's symphonies more resemble Haydn's than Mozart's in their conciseness and humor.
Kraus was born in Germany and moved to Stockholm, Sweden in 1778, where King Gustav III was busy putting an end to the Age of Liberty, a period when the monarchy was minimized and power entrusted to a parliament. Gustav instead promoted a benevolent monarchy, enacting economic and social reforms and spending lavishly on the arts. Kraus benefited from the King's artistic bent and began composing for the Swedish court. His symphonies were written during his employment there. Many of the symphonies are either lost or misattributed. The ones that survive are mostly three-movement works—another difference from Mozart, whose most famous symphonies have four movements—and are not widely known.
I'm on a mission to introduce more classical music fans to the music of Joseph Martin Kraus. Petter Sundkvist and the Swedish Chamber Orchestra have recorded a dozen of Kraus's symphonies, and I will be featuring them in episodes of Dead White Guys that I host on a rotating basis.
Friday, June 8, 2012
Profile: Robert Ashley
The union of words and music is a subject of much discussion in music philosophy, and as a "problem" it has provided millennia of creative exploration and reinvention. There are hundreds, thousands of works that comment one way or another on the variety of ways speech is employed in music. Our Western musical tradition – let's not get concerned with cavemen and bone flutes yet – began with the Greeks, whose use of music was primarily as an amendment to, an enhancement of, speech, in ways that reflect and instill in the listener a moral character (according to Aristotle, anyway). The most popular trend since then, as exemplified by, well, songs in any of their forms, is to depart from the natural inclinations of spoken speech by drastically, unnaturally contorting it into pitches, pacing, inflections, melisma, intervals, and all the rest that don't crop up in normal conversation. Nobody normally talks with a robust vibrato, or spans two octaves recounting a bad meal they had at a restaurant. Song is an invention. Otherwise, speech in musical works is typically reserved for simple narration in programmatic or dramatic works.
Ahem. Moving on. It's a subject that annoys me persistently, so we'll leave the larger discussion for another time and get to my real subject, a gentleman named Robert Ashley. As concerns the spoken word in music, he is a remarkable person for purposefully un-musicalizing musical speech, or parodying it, really, kind of making musical speech a deliberately unnerving experience. Whereas other contemporary composers, enamored of speech sampling and spoken text though they are, still tend to exaggerate the cadences, rhythm, and melody of spoken speech for positive emotive reflection or representation of the text, or just to be nutty and modern, Robert Ashley's libretti are almost entirely spoken, with some odd quality of inappropriate dispassion, often through a spacey synthesizer. It comes close to baroque-classical recitative, as an analogy, but sometimes he goes to extremes to restrict the cadences, rhythm, and melody of spoken speech toward producing a strained, psychedelic, emotionless declamation. Ashley's voices are bizarre, and when set to alternately banal, hallucinatory, violent, and philosophical prose, delivered almost as run-on sentences, they undermine any certain idea of what you're supposed to feel about either the speaker or the text. All I can say is that when it's not sounding anhedonic, or goofy (chalk this up partially to his often-used Blade Runner synths), it's really, really creepy. For something that by all musical means is not very formally intricate or profound or emotional, it certainly gets a profound emotive response from me, of perplexity and languor and sometimes something like curious horror, like looking at a photograph of a murder. It would be disturbing to ever hear people speak like this in normal life. It is the speech type of a sociopath.
We can normally forgive narrative or poetical songs their musical conceits. We know that it's artistry, not actual talking. It's music. It sounds pretty or dramatic. But with Ashley's music, that bridge over to song, or even recitative, is very often hardly crossed, and to eerie effect. His characters can describe how a homeless friend got his legs blown off and was given morphine by a pizza delivery boy, and in the next minute talk about French fries, fast cars, and flirting with guys, and then not far after that riff for nine minutes on a story of kids witnessing gay sex in the park (Dust, 1998), all in a fatalistic "so it goes" manner. I mean...what?
Enough talking about it. Here's a video of perhaps his most notable work, the television opera Perfect Lives (1978), just to show you what I mean. (Actually, he is kind of credited with creating the genre of the television opera, though I think he wasn't the first to do it. I'll look this up later.) This evidently made John Cage and Spalding Gray pee their pants with delight.
As an experimental musician and avant-garde dramatist, now eighty-plus years old and still composing, Robert Ashley continues to command attention. He's a somebody. But of incidental concern to us is that he is a product of Ann Arbor. Or, perhaps, that Ann Arbor today is in some small degree a product of his.
Robert Ashley was born in Ann Arbor, and stuck around long enough to get a bachelor's degree in music theory from the University of Michigan in 1952. After getting a graduate degree from the Manhattan School of Music, he came back to Ann Arbor to work in U-M's Speech Research Laboratory, a now evidently defunct branch of the college. Combined with his self-awareness of a mild form of Tourette's (note: I can't find much corroborating primary evidence for this), I can't help but speculate that this intimacy with the minutiae of speech, and with its interaction with the minds of the speakers and the spoken-to, influenced Ashley's artistic style. His music is almost entirely vocal, and he is smitten with its delivery in modern artistic forms. He has even tried to incorporate Tourette's events into performances of his works (again, I think this evidence may be anecdotal and I'll still look this part up, but it's fun to think about for the time being). To perform with a neurological disorder as an instrument. Yeah, weird. And he may have left a bit of that weirdness behind, too.
I'm a youngin, but as I understand it, the progressive Ann Arbor of today is the residue of the late 1950s and 1960s. Its character today is pretty benign and Bobo compared to the radicalism of the anti-Vietnam, liberal, anti-segregation, youthful activism back then, which also spilled into fervent experimentation in the arts. In part, the cultural events that began back then were purposeful counter-culture political statements, especially in the featuring of progressive jazz and blues artists in the spirit of cultural outreach in opposition to segregation. The avant-garde art scene, vestiges of which might be inferred in Ann Arbor's arts culture today – and so I do because I can – had its origins in the ONCE Festival, a gathering of new performance art, film, and music, orchestrated by Robert Ashley and several other native Ann Arborites under the name of the ONCE Group, as they called themselves from 1954 to 1959. (Let me emphasize that Ashley was not alone in this, but he had the most artistically popular career afterwards.) Ashley cites his time in the ONCE Group as formative for his career. During their time together they toured the US, performing improvised theatrical and musical works, and works on homemade electronic instruments, in an age when the Moog synthesizer had only just come into being, and Ashley remained an early adopter of electronic manipulation and unconventional uses of media, such as the aforementioned television opera. The ONCE Festival ran annually for six years beginning in 1961, and was the progenitor of the Ann Arbor Film Festival, which began in 1963.
Neat history, hm?
Wednesday, May 23, 2012
Event: NIME Conference 2012, Concert 2
Energy and attention waning a bit, I'm thankful at the moment that the profusion of media on the Internet can give me an excuse to neglect the play-by-play of last night's concert. Instead, you can experience, at home, in your dressing gown, the program yourself through the magic of videos and links. Commentary, if coming at all, will come later.
No media available for this one, but my Wii reference in the last post came true! RMM had strapped to each arm a Wii controller, used to modify sounds played on and in the piano, through the microphone, and a brass singing bowl.
Of Dust and Sand – Per Bloland
Jack Walk – Scott Deal
Desamor I – Roberto Morales-Manzanares
Flue – Bill Hsu
(n/a)
Vocalise – Sergei Rachmaninoff; Medley – Brian Wilson (arr. for theremin cello)
Thought.Projection – Robert Alexander, David Biedenbender, Anton Pugh, Suby Raman, Amanda Sari Perez, Sam L. Richards
Check out their website, which includes video, at http://www.themindensemble.com/. This particular performance does deserve a description, especially for its spectacular projection work, but later.
Eigenspace – Mari Kimura, Tomoyuki Kato
Where Are You Standing? – Bongjun Kim, Woon Seung Yeo
The version performed last night differed from this in two ways. First, instead of the performer's icon chiming when it has located a target performer, the target performer's icon chimes when it has been located. Second, the stage environment was blocked into concentric rings like a bulls-eye, also projected on the board. Each of those rings corresponded to a tone, a drone activated when a performer stands in it, so that absolute position mattered to the overall background sounds as well. If I remember correctly, the outer rings were higher pitched and the center lower.
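Since I'm describing it anyway, here is a tiny sketch of how that bulls-eye drone layout could be wired up. The ring radii, pitch assignments, and function names are all invented for illustration; I have no idea how Kim and Yeo actually implemented it.

```python
# A small sketch of the bulls-eye drone idea described above: the stage is
# divided into concentric rings, each ring sustains its own drone while a
# performer stands in it, and (if I remember right) outer rings are higher
# pitched. Ring radii and pitches here are made up for illustration.

RING_EDGES = [1.0, 2.0, 3.0, 4.0]                        # ring boundaries, meters from center
RING_PITCHES_HZ = [110.0, 165.0, 220.0, 330.0, 440.0]    # center ring lowest, outer highest

def ring_index(x, y):
    """Which ring a stage position (meters from center) falls in; 0 = center."""
    r = (x * x + y * y) ** 0.5
    for i, edge in enumerate(RING_EDGES):
        if r < edge:
            return i
    return len(RING_EDGES)      # beyond the last edge -> outermost ring

def active_drones(performer_positions):
    """The set of drone frequencies sustained by wherever the performers stand."""
    return sorted({RING_PITCHES_HZ[ring_index(x, y)] for x, y in performer_positions})

if __name__ == "__main__":
    # Three performers: one near center, one mid-stage, one near the edge.
    print(active_drones([(0.2, 0.1), (2.5, 0.0), (0.0, 3.8)]))
```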
Tuesday, May 22, 2012
Event: NIME Conference 2012 Concert 1, Part 1
Good fellows, forgive me. While my goal is not journalistic excellence, I perhaps let my imagination and hopes as expressed in the last post supersede reasonable expectations of a conference. And, having neglected to do my homework on it (and leaving aside those other minor factual errors that I'm too lazy to correct), I missed the one sadly relevant thing this conference shares with many others: it's really expensive. Therefore, no conference for me.
But the evening concerts are cheap, so I went to the concert last night at the Lydia Mendelssohn Theater and have much to say about it. Unfortunately, the experience tempered my previous enthusiasm quite a bit. But, contrarian as I am, my new favorite emotion is ambivalence. Let me talk it out through the descriptions and critiques of the night's works.
Floating Points II – Matthias Schneiderbanger, Michael Vierling
This work is what they called a "collaborative performance" by two performers, one enclosed by a Sensor-table and the other gloved in a Chirotron, both uniquely created instruments, of a kind. The Sensor-table is a table with upward-pointing sensors that detect object ranges, here the hands of the performer, and the object's motions are then translated through a computer into sound. (I assume the principle is the same as in this sensor LED table below.)
It appeared that each sensor corresponded to a different suite of synthesized sounds (booms, crashes, metallic jingling) that the performer commanded all around his body. A direct downward fall of the hand resulted in a percussive activation with decay; a hovering hand, a sustain; and, most interestingly, an upward pinch, as if gingerly plucking grains of sand from a beach to let them fall, sucked the sound away like a percussive strike in reverse. I am not sure whether the surface area of the hand presented to the sensor had any relation to the sound created.
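To make the mapping concrete, here is a minimal sketch of how such a distance-to-envelope scheme might work. The thresholds, gesture names, and the simulated sensor readings are my own assumptions for illustration, not the actual Sensor-table software.

```python
# A minimal sketch of a Sensor-table-like mapping, assuming a hypothetical
# sensor that reports the hand's distance (in cm) at a fixed rate. This is not
# the NIME instrument's code; it just illustrates the strike / sustain /
# "reverse strike" behaviors described above.

def classify_gesture(samples, near=10.0, fall_rate=15.0, rise_rate=15.0):
    """Classify a short window of distance readings (cm, oldest first)."""
    if len(samples) < 2:
        return "none"
    change = samples[-1] - samples[0]        # positive means the hand is moving away
    if change <= -fall_rate:
        return "strike"                      # fast downward fall -> percussive hit
    if change >= rise_rate:
        return "reverse"                     # upward pinch -> sound sucked away
    if samples[-1] <= near:
        return "sustain"                     # hovering near the sensor -> hold
    return "none"

def envelope_for(gesture):
    """Map a gesture to a crude amplitude envelope (attack, sustain, release in s)."""
    return {
        "strike":  (0.005, 0.0, 0.8),        # sharp attack, natural decay
        "sustain": (0.05, 1.0, 0.3),         # held tone while the hand hovers
        "reverse": (0.8, 0.0, 0.005),        # the decay played backwards, roughly
    }.get(gesture, None)

if __name__ == "__main__":
    # Simulated sensor windows: a falling hand, a hover, an upward pinch.
    for window in ([60, 40, 20, 5], [8, 9, 8, 8], [5, 15, 30, 50]):
        g = classify_gesture(window)
        print(window, "->", g, envelope_for(g))
```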
The Chirotron is a glove that very literally controls the direction the sound from the Sensor-table comes from. Wherever the operator pointed, the sound would appear from the speakers in that area of the room, surround-sound to the max. The performance involved both of these, and the Chirotron's performer would sweep his hand across the room and the sound would travel as well, though of course without the Doppler effect of an actual traveling sound-producing object. The Chirotron is a neat analogue to the conductor, who points to a part of the orchestra to get that section to respond. In this case, however, it's the actual performance environment that responds and not the activation of the instrument itself.
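For a sense of what that pointing-to-place mapping involves, here is a minimal sketch of spreading a signal across a ring of speakers based on the azimuth the glove points at. The four-speaker layout and the equal-power crossfade are assumptions of mine; I don't know how the Chirotron actually routes its audio.

```python
# A minimal sketch of pointing-based spatialization, as I understand the
# Chirotron's effect: the glove's azimuth picks where in the speaker ring the
# sound appears. The speaker layout and crossfade are my own assumptions.

import math

SPEAKER_ANGLES = [0, 90, 180, 270]   # degrees around the hall, hypothetical layout

def gains_for_azimuth(azimuth_deg):
    """Spread the signal over the two speakers nearest the pointed direction."""
    gains = [0.0] * len(SPEAKER_ANGLES)
    step = 360 / len(SPEAKER_ANGLES)
    lo = int(azimuth_deg % 360 // step)        # speaker just "below" the pointed angle
    hi = (lo + 1) % len(SPEAKER_ANGLES)        # its clockwise neighbor
    frac = (azimuth_deg % 360 - lo * step) / step
    # Equal-power crossfade keeps perceived loudness roughly constant while panning.
    gains[lo] = math.cos(frac * math.pi / 2)
    gains[hi] = math.sin(frac * math.pi / 2)
    return gains

if __name__ == "__main__":
    for az in (0, 45, 90, 200):
        print(az, [round(g, 2) for g in gains_for_azimuth(az)])
```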
Floating Points II demonstrated a fun kind of manipulation; still, together the Sensor-table and Chirotron represent an idea that evidently keeps capturing imaginations, playing through gesticulation and body movement without any actual physical contact, which is still, well, kind of old. I mean, remember the theremin? That was an instrument that essentially translated gesticulation through electromagnetic fields, while this uses infrared. The medium is different, the concept the same. And the analogue of a conductor mentioned before is evident, although what is new is the idea of conducting electronic sound live.
As a performance, while the Chirotron had a very literal and concrete role to play, it was very difficult to understand what control the Sensor-table's performer held. The raw classes of sounds apparently were synthesized pre-performance, and the performer's job was to activate them (that's a word I'm going to use a lot – activate) but with very little real control. There appeared to be no ability to manage pitch, just the timing of strike and decay. The many wires and a MacBook showed that the sounds themselves originated elsewhere, and even in performance the performer seemed disconnected in more than one way from the actual making of music. It didn't help very much that the sounds they chose to perform were ugly and the composition itself had nothing like structure, much less architecture, to it. While an interesting demonstration of technology, it does not yet work as an effective medium for music.
Still, I must give much credit to a very effective demonstration of how manipulation of sound direction and performance space vastly changes how we could experience performances, if only works were composed to take advantage of sound origins other than the stage just in front of you.
Oh wait, that's also an old idea, dating back to the late Italian Renaissance and the Venetian polychoral style, when cathedral acoustics and balconies were exploited to split the performers into separate sections that answered each other in call-and-response from across the room, and thus a directional performance.
You can watch a performance of Floating Points below and judge for yourself.
Violent Dreams – Hans Leeuw, Diemo Schwarz
Uh oh. Another MacBook. I have a feeling that the most used musical instrument here is going to be a computer.
Double uh oh. The word “improvisation” in the program. It’s going to be a long night.
The electrumpet, performed by Hans, appears to have two mouthpieces and two functions: one as a trumpet making acoustic trumpet sounds, and, with the other attached set of buttons, an instrument that senses the column of air produced and translates it into electric signals. While it made many ugly noises, as a concept it is still interesting that some physical elements of the original vibrating column of air seem to be retained in the electronic interpretation, and it's not just a matter of button-pressing.
The other "instrument" suite was an array of tablets and programming called CataRT, which Diemo operated by touch, again appearing really to be controlling the initiation and manipulation of sounds pre-recorded on that damn MacBook. As would make sense, taps produced percussiveness, swipes of the finger across trackpads produced swipes of sound, and so on. Here, through his iPad-like device, we evidently see the introduction of the accelerometer: instead of operating by touching the pad or by the pad sensing movement of the performer, the pad manipulated sounds by translating its own movement. Sometimes this did produce analogous imitations of manipulations. Jiggling the pad wavered the noises, shifting it one way shifted the timbre, shifting it the other did something else, like he was playing tilt ball in secret. More on accelerometer use in the next performance.
Yet, and this will be a persistent problem throughout, the performance was not of a musical piece but a demonstration of musical instruments, and an improvisation of displeasing noises played on two seemingly ridiculously complicated new instruments was hardly an endorsement of their potential to play controlled, composed music. What would the sheet score look like for CataRT, for instance? Probably like a cheat code for Mortal Kombat. Up-up-down-left-A-down-B-B-Up, and then destroy your audience by ripping their spines out.
Again, you can check out a similar performance below.
4 Hands iPhone – Atau Tanaka, Adam Parkinson
Each of the two performers holds in each hand an iPhone, again using them as the interface with which to manipulate pre-recorded sounds. The main medium here was motion, and now I get to talk about accelerometers. These sensors measure the force of acceleration on their object in all directions, though I think people would prefer to call them the x, y, and z axes. The object, set in motion or coming to a stop, exerts forces along those axes, which can then be translated as instructions to the sound to wiggle around, crescendo, change timbre, and such. Set flat and still, gravity forms a baseline, so that when the sensor is tilted the accelerating force from gravity alone also sends a signal; from that, its pitch and roll, and hence the orientation of the object, can be known and, again, translated as instructions to the sound. I'm not sure whether the spatial position of the sensor itself has any effect, for instance whether the same motions 1 ft off the ground produce the same sounds as 10 ft off the ground, but I don't think it does. In any case, the concept of being able to translate physical forces experienced by an object – not inflicted on the object, like striking a string or blowing into a tube – into signals is really cool. I feel it's analogous to being able to hear physics, as if (synesthetes know about this) color could be translated into taste, or light into smell, or temperature into sound.
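To make that concrete, here is a minimal sketch of the standard tilt calculation from a three-axis accelerometer, with a made-up mapping from roll to a filter cutoff. It is purely illustrative; it is not the actual 4 Hands iPhone software, whose mappings I don't know.

```python
# A rough sketch of what an accelerometer hands a mapping layer, assuming a
# 3-axis sensor reporting acceleration in g along x, y, z. The tilt formulas
# are the standard ones for a roughly stationary sensor; the mapping to a
# filter cutoff is invented for illustration.

import math

def tilt_from_accel(ax, ay, az):
    """Pitch and roll (degrees) of a stationary sensor, derived from gravity alone."""
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

def cutoff_from_roll(roll_deg, low=200.0, high=4000.0):
    """Map roll (-90..+90 degrees) onto a filter cutoff in Hz, just as an example."""
    t = (max(-90.0, min(90.0, roll_deg)) + 90.0) / 180.0
    return low + t * (high - low)

if __name__ == "__main__":
    # Flat and still: gravity all on z. Tilted: some of it leaks into x and y.
    for ax, ay, az in ((0.0, 0.0, 1.0), (0.3, 0.2, 0.93), (0.0, 0.7, 0.7)):
        p, r = tilt_from_accel(ax, ay, az)
        print(f"pitch={p:5.1f}  roll={r:5.1f}  cutoff={cutoff_from_roll(r):7.1f} Hz")
```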
Oh, but wait. Where have I seen this tech before?
Sigh. Yet again the performance itself was not a ringing endorsement of the technology, and the technology not a ringing endorsement of its creators' attunement with pop culture. First, there are enormous complications of performance. The motions of the performers don't always translate reliably, so, again, the performer often looks disconnected from the sounds produced by his device. Then, in this case, any motion at all is a signal, even an unintended one, so the performer is limited to very delicate and cautious motions. No fiery performance here. (Of course, there could be a computer program, or, say, a console and software, maybe call it Nintendo, which can determine the acceptable range of motions and sounds in any instance.) And also, because the performer only has so much space and ability, the accelerometers are confined foremost by the performer's own range of motion. What happens if you want to accelerate upwards but are already holding the device above your head? You have to move downwards first, even if you don't want to. Really, the performance space is the reachable sphere around the performer and good old physical ability. In this case, then, it seems really ironic to see such a massive amount of computing power in one's hands and then limit it to such a cumbersome, circumscribing, and clumsy method of handling it.
Plus, the pre-recorded sounds they were manipulating were, once again, at dangerous volumes and deadly horrifying to listen to.
Aphasia – Mark Applebaum
I'll leave you with this. It's a piece of performance art, this one, and does not actually demonstrate live creation of sound. Mark performs his own gesticulations, a sign language of sorts, in time with a recording of vocal chop suey. I think it shows the sort of control music engineers aspire to with gesticulated performance, a control not nearly achieved by the previous efforts.
Wednesday, May 16, 2012
Event: NIME Conference 2012
During the Events Calendar on Dead White Guys last Sunday, I mentioned the New Interfaces for Musical Expression (NIME) conference coming to U-M next week, and I would now like to tell you more about it. Hopefully, you'll be convinced you should attend, because if you don't it will prove one of the great mistakes of your life, one of the great missed opportunities you'll tearfully recount to your only, reluctant, there-because-s/he-would-feel-bad-otherwise friend while decaying on your deathbed after a disappointing existence spent shunted off from intellectual exploration. Trust me: go, and your corpus callosum will thank you for it.
Did I say conference? That's unfortunate, because most conferences aren't fun. The presenters usually compete with Saltines for dryness. Everyone in attendance seems as if, instead of Sports Illustrated, it's the Proceedings of the American Society for Plant Husbandry in their toilet-side magazine racks, and people only really go to wipe sweat on each other's hands and get blind drunk on business expenses afterwards. But I hope and expect this one is different.
NIME is three days' worth of presentations, papers, and posters by visiting boffins and U-M affiliates on new ways to perform, produce, listen to, analyze, and interact with music and sound creation. For music geeks like you and me, that's already promising, because dear heavenly Hostess we really don't need to hear anything more about the influence of the metronome on late classical-Romantic performance practice, or what actually killed Mozart. (Answer: aliens.) There's the future and experimentation to think about, and the future is really, really interesting, and what people have done recently to innovate with interfaces in musical expression is still incredible.
And now, a digression. For thousands of years, up through the 19th century, musical technology was confined to the tangibly physical, and most innovations to sounds and instruments were refinements of shapes, materials, and mechanical processes – better alloys, piston and cylinder valves, slowly perfected resonance chambers and such. And in fact, most of these innovations (along with other reasons I won't go into here) were accompanied by a narrowing diversity of instruments and sounds in art music. Think about how few instruments actually appear in most acoustic music, and how many became extinct in the meantime. There's a reason we don't hear traditional crumhorns and glass harmonicas and melodeons anymore. They didn't live up. They were crap to play and crap to listen to and far too limiting in performance compared to what replaced them.
The addition of electricity and electric recording media reversed this trend significantly. Electric pickups, synthesizers, soundboards, software, tapes, synthaxe drumitars, all these things popping up in the last century and coming to dominate how we create and process sounds. It's freaking amazing to think what the possibilities can be, limited not so much by chemistry, physical laws, and physical ability, but now by computing power, creativity, and the limits of human hearing. (Even that last one is debatable. Would a symphony written in the range above human hearing still be music?)
And these last – no, latest – frontiers are what NIME appears to explore. If you hadn't suspected, yes, I did wet myself a little while writing that last paragraph.
Here's a sampling of the titles of posters, papers, and demonstrations listed in the programs for this year:
"Temporal Control in the EyeHarp Gaze-Controlled Musical Interface"
"A New Keyboard-Based, Sensor-Augmented Instrument for Live Performance"
"SenSynth: a Mobile Application for Dynamic Sensor to Sound Mapping"
"The Planetarium as a Musical Instrument"
"Movement to Emotions to Music: Using Whole Body Emotional Expression as an Interaction for Electronic Music Generation"
"Tweet Harp: Laser Harp Generating Voice and Text of Real-time Tweets in Twitter"
"AuRal: A Mobile Interactive System for Geo-Locative Audio Synthesis"
"FutureGrab: A wearable synthesizer using vowel formants"
Why aren’t you excited?!?! I’m excited.
Best yet, there are two sessions of concerts each night of the three days, which appear to be demonstrating these amazing things. Heck, the nighttime concerts for Tuesday and Wednesday are at Necto for goodness sake. Maybe they want to keep the tradition of having the opportunity to get blind drunk at these things. I will do my best to be in attendance for the conference and concerts and, if feeling enterprising and sober enough, will post retrospectives after each day to this blog. Stay tuned.