Julian Alberto Villegas Orozco

Julian Alberto Villegas Orozco

Senior Associate Professor

Affiliation
School of Computer Science and Engineering / Division of Information Systems
Senior Associate Professor
E-Mail
julian@u-aizu.ac.jp
Website
https://onkyo.u-aizu.ac.jp/

Education

Courses - Undergraduate
LI10 Introduction to Multimedia Systems
IT09 Sound and Audio Processing
FU14 Intro. to Software Engineering (exercise class)
FU15 Introduction to Data Management (exercise class)
Courses - Graduate
Spatial Hearing and Virtual 3D Sound
Introduction to Sound and Audio
Digital Audio Effects
Multimedia Machinima

Research

Research Fields
Linguistics
Software
Perceptual information processing
Human interface and interaction
Entertainment and game informatics
I am interested in spatial sound, audio signal processing, phonetics, psychoacoustics, and aural/oral human-computer interaction.
Biography
2021 - Senior Associate Professor, University of Aizu.
2013 - Associate Professor, University of Aizu.
2010 - Researcher, Ikerbasque - University of the Basque Country.
2010 - Ph.D. in Computer Science and Engineering, University of Aizu.
Current Research Topics
PSYPHON: Psychoacoustic features for Phonation prediction
Research Keywords
Aural/oral human-computer interaction, real-time programming, visual programming
Academic Societies
- Audio Engineering Society
- Acoustical Society of Japan
- Acoustical Society of America
- IEEE

Personal Data

Hobbies
Running, playing music, etc.
Childhood Dream
Building things
Future Goals
Building a better world
Favorite Motto
If you have the will, you can do anything
Favorite Books
A Life on Our Planet: My Witness Statement and A Vision for the Future by David Attenborough

Guns, Germs, and Steel: The Fates of Human Societies by Jared Diamond

The Brain That Changes Itself: Stories of Personal Triumph from the Frontiers of Brain Science by Norman Doidge

The Anthropocene Reviewed by John Green
Message for Students
We are always looking forward to collaborative research; email me if you are interested. We're particularly interested in Master's and doctoral students.

Main Research

Sound and Audio Technologies

I have spent over two decades as an information scientist, dedicating the last ten years to sound and audio research at the University of Aizu. Throughout my career, I have published over 150 articles in top journals and conferences, and I hold three patents related to sound and audio. I have also supervised more than 30 undergraduate students and a dozen Master's students, and I am currently guiding two Ph.D. students.

We call our lab "Onkyo." In our lab, we explore sound as a powerful medium for transmitting information between humans and machines. Our research relies heavily on machine learning methods and focuses on three key areas:

Spatial Sound: In an age of information overload, where vision is saturated with data from daily gadgets, we seek to use spatial (3D) sound delivered through loudspeakers or headphones to convey vital information. Our interests lie in spatial data compression, auditory display personalization, and the development of multi-sensory interfaces. By enriching human experiences with audio interactions, we strive to push the boundaries of human-machine communication (a minimal panning sketch appears after this list).
Applied Psychoacoustics: The discipline of psychoacoustics studies how sound in the physical world is perceived and processed in our minds. Understanding these perceptual processes enables us to identify when the brain's processing capabilities are outpaced by hardware. This opens doors to innovative interfaces, such as near-ultrasound communication (illustrated in the second sketch after this list) and advancements in speech communication technologies.
Applied Phonetics: Phonetics, the study and classification of speech sounds, plays a crucial role in human-machine communication. Through collaborative research, we investigate the effects of noise on speech, multilingualism, articulation, and phonation phenomena (the production of speech sounds). By comprehending speech production and perception in diverse settings, we seek to improve speech technologies for seamless human-machine interactions.
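
To make the first of these areas concrete, below is a minimal sketch of headphone panning from interaural time and level differences (ITD/ILD), the two classic spatial-hearing cues. It is an illustration only, not the lab's code: the Woodworth ITD formula and the 6 dB level cap are textbook approximations, the function pan_itd_ild is invented for this example, and real systems typically use measured head-related transfer functions (HRTFs).

import numpy as np

FS = 48_000           # sample rate (Hz)
HEAD_RADIUS = 0.0875  # average human head radius (m)
C = 343.0             # speed of sound (m/s)

def pan_itd_ild(mono, azimuth_deg):
    """Place a mono signal at azimuth_deg (0 = front, positive = right)
    using an interaural time difference (Woodworth's formula) plus a
    crude, frequency-independent interaural level difference."""
    az = np.radians(azimuth_deg)
    itd = HEAD_RADIUS / C * (az + np.sin(az))            # seconds
    delay = int(round(abs(itd) * FS))                    # whole samples
    near = mono
    far = np.concatenate([np.zeros(delay), mono])[: len(mono)]
    far = far * 10.0 ** (-6.0 * abs(np.sin(az)) / 20.0)  # far ear up to ~6 dB quieter
    left, right = (far, near) if azimuth_deg >= 0 else (near, far)
    return np.column_stack([left, right])                # shape (n, 2)

# Example: a one-second 440 Hz tone placed 45 degrees to the right.
t = np.arange(FS) / FS
stereo = pan_itd_ild(0.5 * np.sin(2 * np.pi * 440.0 * t), 45.0)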
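
Similarly, near-ultrasound communication can be pictured as a toy binary FSK scheme that hides bits in short tone bursts around 19-20 kHz, near the upper limit of adult hearing. The frequencies, bit rate, and windowing below are invented for the example and do not describe any deployed protocol.

import numpy as np

FS = 48_000              # sample rate (Hz)
F0, F1 = 19_000, 20_000  # tone frequencies for bits 0 and 1 (Hz)
BIT_DUR = 0.01           # 10 ms per bit, i.e. 100 bit/s

def encode(bits):
    """Turn a bit list into one Hann-windowed tone burst per bit."""
    n = int(FS * BIT_DUR)
    t = np.arange(n) / FS
    window = np.hanning(n)  # soften clicks at bit boundaries
    return np.concatenate(
        [window * np.sin(2 * np.pi * (F1 if b else F0) * t) for b in bits]
    )

def decode(signal):
    """Recover bits by comparing spectral energy at the two frequencies."""
    n = int(FS * BIT_DUR)
    freqs = np.fft.rfftfreq(n, 1.0 / FS)
    i0, i1 = np.argmin(np.abs(freqs - F0)), np.argmin(np.abs(freqs - F1))
    bits = []
    for start in range(0, len(signal) - n + 1, n):
        spectrum = np.abs(np.fft.rfft(signal[start : start + n]))
        bits.append(int(spectrum[i1] > spectrum[i0]))
    return bits

assert decode(encode([1, 0, 1, 1, 0])) == [1, 0, 1, 1, 0]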

As we delve into these research areas, our mission is to contribute meaningfully to the field of human-machine interactions. By harnessing the potential of sound, we aspire to create a future where technology and human experiences seamlessly converge.

We are always looking forward to collaborative research; contact us (convert this into a valid email address: julian at u-aizu period ac dot jp) if you are interested. We're particularly interested in Master's and doctoral students.


Selected Books and Papers

For a complete list, please check https://onkyo.u-aizu.ac.jp/#/Publications

J. Villegas, K. Akita, and S. Kawahara, "Psychoacoustic features explain subjective size and shape ratings of pseudo-words," in Proc. of Forum Acusticum, the 10th Conv. of the European Acoust. Assoc., (Turin, Italy), Sep. 2023.

E. Ly and J. Villegas, "Cartesian genetic programming parameterization in the context of audio synthesis," IEEE Signal Process. Letters, vol. 30, pp. 1077–1081, Aug. 2023. DOI: 10.1109/LSP.2023.3304198.

C. Arevalo and J. Villegas, "Study of auditory trajectories in virtual environments," in Proc. of Audio Mostly, Aug. 2023. DOI: 10.1145/3616195.3616210.

J. Villegas, S. J. Lee, J. Perkins, and K. Markov, "Psychoacoustic features explain creakiness classifications made by naive and non-naive listeners," Speech Comm., vol. 147, pp. 74–81, Jan. 2023. DOI: 10.1016/j.specom.2023.01.006.