Comprehensive Guide for getting into Home Recording

I'm going to borrow from a few sources and do my best to make this cohesive, but this question comes up a lot. I thought we had a comprehensive guide, but it doesn't appear so. In the absence of this, I feel that a lot of you could use a simple place to go for some basics on recording. There are a couple of great resources online already on some drumming forums, but I don't think they will be around forever.
Some background on myself: I have been drumming a long time. During that time, home recording has gone from using a cassette deck to having a full-blown studio at your fingertips; the technology in the last 15 years has gotten so good it really is incredible. When I was trying to decide what I wanted to do with my life, I decided to go to school for audio engineering in a world-class studio. During this time I had access to the studio and was able to assist with engineering on several projects. This was awesome, and I came out with a working knowledge of SIGNAL CHAIN, how audio works in the digital realm, how microphones work, studio design, etc. Can I answer your questions? Yes.

First up: Signal Chain! This is the basic building block of recording. Ever seen a "I have this plugged in but am getting no sound!" thread? Yeah, signal chain.

A "Signal Chain" is the path your audio follows, from sound source, to the recording device, and back out of your monitors (speakers to you normies).
A typical complete signal chain might go something like this:
1] Instrument/sound source
2] Microphone/Transducer/Pickup
3] Cable
4] Mic Preamp/DI Box
5] Analog-to-Digital Converter
6] Digital transmission medium [digital data gets encoded for USB or FireWire transfer]
7] Digital recording device
8] DSP and digital summing/playback engine
9] Digital-to-Analog Converter
10] Analog output stage [line outputs and output gain/volume control]
11] Monitors/Playback device [headphones/other transducers]
Important Terms, Definitions, and explanations (this will be where the "core" information is):
1] AD Conversion: the process by which the electrical signal is "converted" to a stream of digital code [binary, 1s and 0s]. This is accomplished, basically, by taking digital "pictures" of the audio, and the number of pictures taken per second is known as the sampling rate (or sampling frequency). So the CD standard of 44.1kHz is 44,100 "pictures" per second of digital code that represents the electrical "wave" of audio.

It should be noted that in order to reproduce a frequency accurately, the sampling rate must be TWICE that of the desired frequency (see: the Nyquist-Shannon sampling theorem). So a 44.1kHz digital audio device can, in fact, only record frequencies as high as 22.05kHz, and in the real world the actual upper frequency limit is lower, because the AD device employs a LOW-PASS filter to remove content above that limit, which would otherwise cause digital errors called "ALIASING."

Confused yet? Don't worry, there's more... We haven't even talked about bit depth! There are two settings for recording digitally: sample rate and bit depth. Sample rate, as stated above, determines the frequencies captured; bit depth determines how precisely the amplitude of each sample is measured. Higher bit depth = more accurate sound wave representation. More on this here.

Generally speaking, I record at 96kHz/24-bit. This makes huge files, but captures really accurate audio. Why does it make huge files? Well, if you are sampling 96,000 times per second at 24 bits per sample, multiply it out and you get 96,000 x 24 = 2,304,000 bits per second, or roughly 0.29MB per second, for ONE TRACK. If that track is 5 minutes long, that is a file about 86.4MB in size. Now let's say you used 8 inputs on an interface: that is, in total, about 691MB of data. Wow, that escalates quickly, right? There is something else to note as well: your CPU has to process all of this. For this same scenario, that is roughly 18.4 million bits to move and calculate PER SECOND.
This is why CPU speed and RAM are super important when recording digitally.
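The arithmetic above can be sketched in a few lines of Python. This uses 96kHz/24-bit as the example setting and decimal megabytes (1 MB = 1,000,000 bytes); it is just the raw PCM math, ignoring file headers.

```python
# Data-rate and file-size arithmetic for uncompressed PCM audio.

def pcm_bits_per_second(sample_rate_hz, bit_depth, channels=1):
    """Raw bits per second for uncompressed PCM audio."""
    return sample_rate_hz * bit_depth * channels

def pcm_file_size_mb(sample_rate_hz, bit_depth, channels, seconds):
    """File size in decimal megabytes (1 MB = 1,000,000 bytes)."""
    bits = pcm_bits_per_second(sample_rate_hz, bit_depth, channels) * seconds
    return bits / 8 / 1_000_000

one_track_rate = pcm_bits_per_second(96_000, 24)       # 2,304,000 bits/s
one_track_5min = pcm_file_size_mb(96_000, 24, 1, 300)  # 86.4 MB
eight_tracks   = pcm_file_size_mb(96_000, 24, 8, 300)  # 691.2 MB
```

Swap in your own sample rate, bit depth, and channel count to see how quickly a session grows.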
2] DA Conversion: the process by which the digital code (the computer's representation of a sound wave) is transformed back into electrical energy in the proper shape. In an oversimplified explanation, the code is read and the output of the converter reflects the value of the code by changing voltage.

Think of a sound wave on a grid: time runs along the X (horizontal) axis, but there is a vertical axis too. This is called AMPLITUDE, or how much energy the wave is carrying. People refer to this as how "loud" a sound is, but that's not entirely correct. You can have a high-amplitude wave played back at a quiet volume. It's important to distinguish the two: how loud a sound is can be controlled by the volume on a speaker or transducer, but that has no impact on how much amplitude the sound wave has in the digital space or "in the wire" on its way to the transducer. So don't get hung up on how "loud" a waveform is; it is how much amplitude it has that matters when talking about it "in the box" or before it gets to the speaker/headphone/whatever.
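In the digital domain, amplitude is usually expressed in dBFS (decibels relative to full scale), which is completely independent of your playback volume. A minimal sketch of the conversion:

```python
import math

def amplitude_to_dbfs(amplitude):
    """Convert a normalized sample amplitude (1.0 = digital full scale)
    to decibels relative to full scale (dBFS)."""
    if amplitude <= 0:
        return float("-inf")  # digital silence
    return 20 * math.log10(amplitude)

full_scale = amplitude_to_dbfs(1.0)  # 0.0 dBFS, the digital maximum
half_scale = amplitude_to_dbfs(0.5)  # about -6 dBFS: half the amplitude
```

Note that halving the amplitude costs about 6dB, no matter where your monitor volume knob sits.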
3] Cables: an often overlooked expense and tool, cables can, in fact, make or break your recording. The multitude of cable types is determined by the connector, the gauge (thickness), the shielding, the type of conductor, etc. Just some bullet points on cables:

- Always get the highest quality cabling you can afford. Low quality cables often employ shielding that doesn't effectively protect against AC hum (60-cycle hum), RF interference (causing your cable to act as a gigantic AM/CB radio antenna), or grounding noise introduced by other components in your system.
- The way cables are coiled and treated can determine their lifespan and effectiveness. A kinked cable can mean a broken shield, again causing noise problems.
- The standard in the USA for wiring an XLR (standard microphone) cable is: PIN 1 = Ground/shield, PIN 2 = Hot/+, PIN 3 = Cold/-. Phantom power rides equally on pins 2 and 3 and returns through pin 1, so it is important that the shield of your cables be intact and in good condition if you want to use phantom power without any problems.
- Cables for LINE LEVEL and HI-Z (instrument level) gear are not the same!
- Line level gear, whether professional or consumer, should generally be used with balanced cables (on a 1/4" connector, it will have 3 sections and is commonly known as TRS, or Tip/Ring/Sleeve). A balanced 1/4" is essentially the same as a microphone cable, and in fact most professional gear with balanced line inputs and outputs will have XLR connectors instead of 1/4" connectors.
- Hi-Z cable for instruments (guitars, basses, keyboards, or anything with a pickup) is UNBALANCED, and should be so. Plugging into balanced gear can allow voltage (such as phantom power) to be sent backwards into a guitar and shock the player. You may want this to happen, but your gear doesn't. There is some danger here as well, especially on stage, where the voltage CAN BE LETHAL. When running a guitar/bass/keyboard "direct" into your interface, soundcard, or recording device, you should ALWAYS use a "DIRECT BOX" (DI box), which uses a transformer to isolate and balance the signal, or use any input on the interface designated as an "Instrument" or "Hi-Z" input.
A DI box also changes some electrical properties of the signal, converting the high-impedance instrument output into a low-impedance balanced signal that a mic preamp can handle cleanly.
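To get a feel for the level differences a DI box or preamp has to bridge, here is the standard dBu-to-volts conversion (0 dBu is defined as 0.775V RMS). The example levels below are rough ballpark figures for illustration, not specs:

```python
def dbu_to_volts(dbu):
    """Convert a level in dBu to RMS volts; 0 dBu = 0.775 V RMS."""
    return 0.775 * 10 ** (dbu / 20)

mic_level        = dbu_to_volts(-60)  # ~0.0008 V: a typical mic-level signal
instrument_level = dbu_to_volts(-20)  # ~0.08 V: rough Hi-Z guitar territory
line_level       = dbu_to_volts(4)    # ~1.23 V: +4 dBu professional line level
```

The spread is enormous: the preamp has to add tens of dB of clean gain to bring a mic signal up to line level, which is why preamp quality matters.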
4] Digital Data Transmission: this includes S/PDIF, AES/EBU, ADAT, and MADI. I'm gonna give a brief overview of this stuff, since it's unlikely that a lot of you will ever really have to think about it:

- S/PDIF = Sony/Philips Digital Interface Format. Using RCA or TOSLINK connectors, this is a digital protocol that carries 3 streams of information: digital audio left, digital audio right, and CLOCK. S/PDIF generally supports 48kHz/20-bit information, though some modern devices can support up to 24 bits and up to 88.2kHz. S/PDIF is the consumer format of AES/EBU.
- AES/EBU = Audio Engineering Society/European Broadcasting Union digital protocol. It uses a special type of cable, often terminated with XLR connectors, to transmit 2 channels of digital audio. AES/EBU is found mostly on expensive professional digital gear.
- ADAT = the Alesis Digital Audio Tape, introduced in 1991, was the first cassette-based system capable of recording 8 channels of digital audio onto a single cartridge (a Super-VHS tape, the same one used by high-quality VCRs). Enough of the history; it's not so important, because what we are really talking about is the ADAT Lightpipe protocol, a digital transmission protocol that uses fiber-optic cable and devices to send up to 8 channels of digital audio simultaneously and in sync. ADAT Lightpipe supports sample rates up to 48kHz. This is how people expand the number of inputs by chaining interfaces.
- MADI is something you will almost never encounter. It is a protocol that allows up to 64 channels of digital audio to be transmitted over a single cable terminated with BNC connectors. I'm just telling you it exists so that if you ever encounter a digital snake that doesn't use Gigabit Ethernet, you'll know what's going on.
Digital transmission specs at a glance:

S/PDIF (coaxial) -> clock -> 2ch -> RCA cable (consumer)
S/PDIF (optical) -> clock -> 2ch -> TOSLINK (consumer)
ADAT Lightpipe -> clock -> 8ch -> TOSLINK (semi-pro)
AES/EBU -> clock -> 2ch -> XLR (pro)
TDIF -> clock -> 8ch -> D-sub (semi-pro)
MADI -> no clock -> 64ch -> BNC (rare except in large-scale professional applications)
SDIF-II -> no clock -> 24ch -> D-sub (rare!)
AES/EBU-13 -> no clock -> 24ch -> D-sub
5] MICROPHONES: there are many types of microphones, and several names for each type. The type of microphone doesn't equate to the polar pattern of the microphone. There are a few common polar patterns, plus several more that are less common. The main ones are: Omni-directional, Figure-8 (bi-directional), Cardioid, Supercardioid, Hypercardioid, and Shotgun. Some light reading.... Now for the types of microphones:

- Dynamic microphones utilize magnets to convert acoustical energy into electrical energy. There are 2 types of dynamic microphones: 1) Moving-coil microphones are the most common type of microphone made. They are also durable, and capable of handling VERY HIGH SPLs (sound pressure levels). 2) Ribbon microphones are rare except in professional recording studios, and are incredibly fragile. NEVER EVER USE PHANTOM POWER WITH A RIBBON MICROPHONE, IT WILL DIE (unless it specifically requires it, but I've only ever seen this on one ribbon microphone ever). Sometimes it might even smoke or shoot out a few sparks; applying phantom power to a ribbon microphone will literally cause the ribbon, which is normally made from aluminum, to MELT. Also, wind blasts and plosives can rip the ribbon, so these microphones are not suitable for things like horns, woodwinds, vocals, kick drums, or anything that "pushes air." There have been some advances in ribbon microphones and they are getting more common, but they are still super fragile and you have to READ THE MANUAL CAREFULLY to avoid a $1k+ mistake.
- Condenser/capacitor microphones use an electrostatic charge to convert acoustical energy into electrical energy. The movement of the diaphragm (often metal-coated mylar) toward a "backplate" causes a fluctuation in the charge, which is then amplified inside the microphone and output as an electrical signal.
Condenser microphones usually use phantom power to polarize the capsule (diaphragm and backplate) and to run their internal electronics. There are several types of condenser microphones:

1) Tube condenser microphones: historically, this type of microphone has been used in studios since the 1940s, and has been refined and redesigned hundreds, if not thousands, of times. Some of the "best-sounding" and most desired microphones EVER MADE are tube condensers from the '50s and '60s. These vintage microphones, in good condition and with the original TUBES, can sell for hundreds of thousands of dollars. Tube mics are known for sounding "full" and "warm," with a particular character depending on the exact microphone. No 2 tube mics, even of the same model, will sound the same. Similar, but not the same. Tube mics have their own power supplies, which are not interchangeable between models; each tube mic is a different design and therefore has different power requirements.

2) FET condenser microphones: FET stands for "Field Effect Transistor," and the technology allowed condenser microphones to be miniaturized. Take, for example, the Shure Beta 98s/d, which is a mini condenser microphone. FET technology is generally more transparent than tube technology, but can sometimes sound "harsh" or "sterile."

3) Electret condenser microphones are condensers with a permanently charged element, and therefore do not require phantom power to polarize the capsule; the internal electronics still need power, though, so these mics often use AA or 9V batteries, either inside the mic or on a beltpack. These are less common.
Other important things to know about microphones:
- Pads, rolloffs, etc.: some mics have switches or rotating collars that notate certain settings, most commonly high-pass/low-cut filters or attenuation pads.

1) A HP/LC filter does exactly what you might think: it removes low-frequency content from the signal at a set frequency and slope. Some microphones allow you to switch the rolloff frequency. Common rolloff frequencies are 75Hz, 80Hz, 100Hz, 120Hz, 125Hz, and 250Hz.

2) A pad, in this context, is a switch that lowers the output of the microphone directly after the capsule, to prevent overloading the input of a microphone preamplifier. You might be asking: how is that possible? Some microphones put out a VERY HIGH signal level, sometimes near line level (-10/+4dBu); mic level is generally accepted to start around -75dBu and increases from there until it approaches line level in voltage. It should be noted that line-level signals are normally of a different impedance than mic-level signals, which is determined by the gear. An example: I mic the top of a snare drum with a large-diaphragm condenser (solid-state, not tube) that is capable of handling very high SPLs (sound pressure levels). When the snare drum is played, the input of the mic preamp clips (distorts), even with the gain turned all the way down. To combat this, I would use a pad with enough attenuation to lower the signal into the proper input range (-60dB to -40dB). In general, it is accepted practice to use a pad with only as much attenuation as you need, plus a small margin of error for extra "headroom." What this means is that if you use a 20dB pad where you only need a 10dB pad, you will then have to add an additional 10dB of gain to achieve a desirable signal level. This can cause problems, as not all pads sound good, or even transparent, and they can color and affect your signal in sometimes unwanted ways that are best left unamplified.

- Other mic tips/info: 1) when recording vocals, you should always use a pop filter.
A pop filter mounted on a gooseneck is generally more effective than a foam windscreen that slips over the microphone. The foam type often kills the high-frequency response, alters the polar pattern, and can introduce non-linear polarity problems (part of the frequency spectrum will be out of phase). If you don't have a pop filter or don't want to spend on one, buy or obtain a hoop of some kind, buy some cheap pantyhose, and stretch it over the hoop to build your own pop filter. 2) Terms related to mics: - Plosives: "B", "D", "G", "K", "P", "T": hard consonants and other vocal sounds that cause wind blasts. These are responsible for a low-frequency pop that can severely distort the diaphragm of the microphone, or cause a strange inconsistency of tonality through a short-term proximity effect.
- Proximity effect: a pronounced increase in low-frequency response caused by placing a directional microphone excessively close to a sound source. It can be caused either by the force of the moving air physically moving (and sometimes distorting) the microphone's diaphragm, usually with vocalists, or by the buildup of low-frequency sound waves due to the off-axis cancellation ports. You cannot get proximity effect on an omnidirectional microphone. With some practice, you can use proximity effect to your advantage, or as an effect. For example, if you are recording someone whispering and it sounds thin, weak, and irritating due to the intense high-mid and high-frequency content, get the person very close to a cardioid microphone with two pop filters back to back, approx. 1/2"-1" away from the mic, set your gain carefully, and you can achieve a very intimate recording of whispering. In a different scenario, you can place a mic inside a kick drum, between 1"-3" away from the inner shell, angled up at the point of impact and towards the floor tom. This usually captures a huge low end, plus the sympathetic vibration of the floor tom on the kick drum hits, while retaining clarity of attack without being distorted by the SPL of the drum and without capturing the unpleasant low-mid resonance of the kick drum head and shell that is common directly in the middle of the shell.
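The pad arithmetic from the section above can be sketched in a few lines. The snare and preamp numbers here are hypothetical, chosen only to illustrate the "only as much attenuation as you need" rule:

```python
def pad_needed_db(signal_dbu, max_input_dbu, margin_db=3):
    """Smallest pad (in dB) that brings a hot mic signal below the
    preamp's maximum input level, plus a small headroom margin.
    All levels are in dBu; the margin default is an arbitrary choice."""
    overshoot = signal_dbu - max_input_dbu
    return max(0, overshoot + margin_db)

# Hypothetical: a condenser on a snare peaks around -10 dBu, but the
# preamp input starts clipping above -25 dBu.
pad = pad_needed_db(-10, -25)  # 18 dB needed: a 10 dB pad is too small here
```

Picking the smallest sufficient pad (rather than the biggest available) avoids having to add gain back later, which re-amplifies any coloration the pad introduced.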
6) Wave Envelope: the envelope is the shape of a sound's amplitude over time, commonly seen in the waveform display of a DAW. There are 4 parts to this: Attack, Decay, Sustain, Release. 1) Attack is how quickly the sound reaches its peak amplitude; 2) Decay is the time it takes to fall from that peak to the sustain level; 3) Sustain is how long the sound remains at a certain level (think of striking a tom: the initial smack is the attack, then it decays to the resonance of the tom, and how long it resonates is the sustain); 4) Release is the time it takes the sound to die away after the sustain ends. This is particularly important because these same terms show up as the settings on a common piece of gear called a compressor! Understanding the envelope of a sound is key to learning how to manipulate it.
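To make the four stages concrete, here's a toy linear ADSR envelope in Python. The tom numbers at the end are made-up illustrative values, not measurements:

```python
def adsr_amplitude(t, attack, decay, sustain_level, sustain_time, release):
    """Amplitude (0.0-1.0) of a simple linear ADSR envelope at time t.
    All times are in seconds; a toy model of the envelope described above."""
    if t < 0:
        return 0.0
    if t < attack:                      # ramp up to the peak
        return t / attack
    t -= attack
    if t < decay:                       # fall from the peak to the sustain level
        return 1.0 - (1.0 - sustain_level) * (t / decay)
    t -= decay
    if t < sustain_time:                # hold at the sustain level
        return sustain_level
    t -= sustain_time
    if t < release:                     # die away to silence
        return sustain_level * (1.0 - t / release)
    return 0.0

# A tom hit, very roughly: fast attack, quick decay, long resonant tail.
tom = lambda t: adsr_amplitude(t, 0.005, 0.05, 0.4, 1.5, 0.3)
```

Sampling this function over time traces out exactly the kind of shape you see on a recorded tom track in a DAW.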
7) Phase Cancellation: this is one of the most important concepts in home recording, especially when looking at drums, which is why I'm putting it in this section. Phase cancellation is what occurs when the same signal arrives at different times. To put it simply, amplitudes are additive: if you have 2 sound waves of the same frequency, one with amplitude +4 and the other +2, and they are in phase, what we perceive is a wave at +6. But a sound wave swings both positive and negative as it travels (like a wave in the ocean with a peak and a trough). If the same frequency comes from two sources 180 degrees out of phase, one wave is at +4 while the other is at -4. They sum to 0, canceling out the wave; effectively, you would hear silence. This is why miking techniques are so important, but we'll get into that later. I wanted this term near the top, and will likely mention it again.
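You can verify the arithmetic above with a few lines of Python: two identical sine waves, one flipped 180 degrees, sum to silence, while two in-phase copies double in amplitude.

```python
import math

SAMPLE_RATE = 48_000  # samples per second

def sine(freq_hz, phase_deg, n_samples):
    """n_samples of a unit-amplitude sine wave with a phase offset in degrees."""
    phase = math.radians(phase_deg)
    return [math.sin(2 * math.pi * freq_hz * n / SAMPLE_RATE + phase)
            for n in range(n_samples)]

a = sine(440, 0, 480)
b = sine(440, 180, 480)                    # identical wave, 180 degrees out

cancelled = [x + y for x, y in zip(a, b)]  # out of phase: sums to silence
doubled   = [x + x for x in a]             # in phase: amplitudes add

peak_cancelled = max(abs(s) for s in cancelled)  # ~0.0
peak_doubled   = max(abs(s) for s in doubled)    # ~2.0
```

This is exactly what happens when two mics pick up the same drum at slightly different distances, except real-world delays cancel some frequencies and reinforce others instead of killing everything at once.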

Next we can look at the different types of options to actually record your sound!

1) Handheld/All-in-one/Field Recorders: I don't know if portable cassette tape recorders are still around, but that's an example of one. These are (or used to be) very popular with journalists because they were pretty decent at capturing speech; they don't fare too well with music, though. Not too long ago, we saw the emergence of the digital field recorder. These are really nifty little devices. They come in many shapes, sizes, and colors, and can be very affordable. They run on batteries, have built-in microphones, and record digitally onto SD cards or hard drives. The simpler ones have a pair of built-in condenser microphones, which may or may not be adjustable, and record onto an SD card. They start around $99 (or less if you don't mind buying refurbished). You turn it on, record, connect the device itself or the SD card to your computer, transfer the file(s), and there is your recording!

An entry-level example is the Tascam DR-05. It costs $99. It has two built-in omni-directional mics, comes with a 2GB microSD card, and runs on two AA batteries. It can record in different formats, the highest being 24-bit/96kHz Broadcast WAV, which is higher than DVD quality! You can also choose to record as an MP3 (32-320kbps) if you need to save space on the SD card, or if you're simply going to record a speech/conference or upload it to the web later on. It's got a headphone jack and even small built-in speakers. It can be mounted on a tripod. And it's about the size of a cell phone.

The next step up (although there are of course many options that are price- and feature-wise in between this one and the last) is a beefier device like the Zoom H4n. It's got all the same features as the Tascam DR-05 and more! It has two adjustable built-in cardioid condenser mics in an XY configuration (you can adjust the angle from a 90-120 degree spread). On the bottom of the device there are two XLR inputs with preamps. With those, you can expand your recording possibilities with two external microphones. The preamps can send phantom power, so you can even use very nice studio mics. All 4 channels will be recorded independently, so you can pop them onto your computer later and mix them with software. This device can also act as a USB interface, so instead of just using it as a field recorder, you can connect it directly to your computer, or to a DSLR camera for HD filming.

My new recommendation for this category is actually the Yamaha EAD10. It really is the best all-in-one solution for anyone who wants to record their kit audio with a great sound. It sports a kick drum trigger (mounts to the rim of the kick) with an XY pair of microphones to pick up the rest of the kit. It also has on-board effects, lots of software integration options, and smart features through its app. It really is a great solution for anyone who wants to record without reading this guide.
The TL;DR of this guide is - if it seems like too much, buy the Yamaha EAD10 as a simple but effective recording solution for your kit.

2) USB Microphones: there are actually mics that you can plug directly into your computer via USB; the mics themselves are their own audio interfaces. These mics come in many shapes and sizes, and offer affordable solutions for basic home recording. You can record using a DAW, or even something simple like the stock Windows Sound Recorder program in the Accessories folder of the Windows operating system.

The Blue Snowflake is very affordable at $59. It can stand alone, or you can attach it to your laptop or flat-screen monitor. It can record up to 44.1kHz, 16-bit WAV audio, which is CD quality. It's a condenser mic with a directional cardioid pickup pattern and a full frequency response of 35Hz-20kHz. It probably won't blow you away, but it's a big departure from your average built-in laptop, webcam, headset, or desktop microphone.

The Audio-Technica AT2020 USB is a USB version of their popular AT2020 condenser microphone. At $100 it costs a little more than the regular version. The AT2020 is one of the finest mics in its price range: it's got a very clear sound and it can handle loud volumes. Other companies like Shure and Samson also offer USB versions of some of their studio mics. The AT2020 USB also records up to CD-quality audio and comes with a little desktop tripod.

The MXL USB.009 is an all-out USB microphone. It features a 1-inch large-diaphragm condenser capsule and can record up to 24-bit/96kHz WAV audio. You can plug your headphones right into the mic (remember, it is its own audio interface), so you can monitor your recordings with no latency, as opposed to doing so through your computer. Switches on the mic control the gain and can blend the mic channel with playback audio. Cost: $399.

If you already have a mic, or you don't want to be stuck with just a USB mic, you can purchase a USB converter for your existing microphone. Here is a great review of four of them.
3) Audio Recording Interfaces: You've done some reading up on this stuff... now you are lost. Welcome to the wide, wide world of Audio Interfaces. These come in all different shapes and sizes, features, sampling rates, bit depths, inputs, outputs, you name it. Welcome to the ocean, let's try to help you find land.
- An audio interface, as far as your computer is concerned, is an external sound card. It has audio inputs, such as microphone preamps, and outputs which connect to other audio devices, headphones, or speakers. The modern recording "rig" is based around a computer, and to get sound onto that computer, an interface is necessary. All computers have a sound card of some sort, but these have very low-quality A/D converters (analog-to-digital) and were not designed with any kind of sophisticated audio recording in mind, so for our purposes they are useless and a dedicated audio interface must come into play.
- There are hundreds of interfaces out there. Most commonly they connect to a computer via USB or FireWire; there are also PCI and PCI Express-based interfaces for desktop computers. The simplest interfaces can record one channel via USB, while others can record up to 30 via FireWire! All of the connection types have their advantages and drawbacks, but chances are you are looking at USB, FireWire, or Thunderbolt. Most interfaces are in the same realm as far as real-world speed is concerned, though Thunderbolt offers a faster data transfer rate. There are some differences in terms of CPU load: conflict handling (when data packets collide) is handled differently. USB hands conflict resolution to the CPU, FireWire handles it internally, and Thunderbolt, from what I could find, hands it to the CPU as well. For most home-recording applications, none of them is going to be clearly superior; when you get up to 16/24 channels in and out simultaneously, it's going to matter a lot more.
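To put the connection-type question in perspective, here's a back-of-the-envelope channel-count estimate. The 50% efficiency figure is a rough guess at protocol overhead, not a published spec; in practice, driver and CPU limits usually bite long before raw bandwidth does.

```python
def rough_max_channels(bus_bits_per_sec, sample_rate_hz, bit_depth,
                       efficiency=0.5):
    """Crude ceiling on simultaneous audio channels a bus can carry.
    `efficiency` is an assumed fraction of raw bandwidth left after
    protocol framing and real-world inefficiency."""
    usable = bus_bits_per_sec * efficiency
    return int(usable // (sample_rate_hz * bit_depth))

# USB 2.0 high speed is 480 Mbit/s raw.
usb2_channels = rough_max_channels(480_000_000, 96_000, 24)  # ~100 channels
```

Even with half the bandwidth thrown away, USB 2.0 comfortably exceeds any home-studio channel count at 96kHz/24-bit, which is why the interface's drivers and converters matter far more than the connector.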
- There are a number of things to consider when choosing an audio interface: first off your budget, the number of channels you'd like to be able to record simultaneously, your monitoring system, your computer and operating system, and your applications. Regarding budget, you have to get real. $500 is not going to get you a rig with the ability to multi-track a drum set covered in mics. Not even close! You might get an interface with 8 channels for that much, but you have to factor in the cost of everything: mics, cables, stands, monitors/headphones, software, etc.

Considerations: stereo recording or multi-track recording? Stereo recording means recording two tracks, a left and a right channel, which reflects most audio playback systems. This doesn't necessarily mean you are simply recording with two mics; it means that what your rig records onto your computer is a single stereo track. You could be recording a 5-piece band with 16 mics/channels, but if you're recording in stereo, all you're getting is a summation of those 16 channels. This means that in your recording software, you won't be able to manipulate any of those channels independently after you record them. If the rack tom mic wasn't turned up loud enough, or you want to mute the guitars, you can't, because all you have is a stereo track of everything. It's up to you to get your levels, balance, and tone right before you hit record. If you are only using two mics or lines, then you will have individual control over each mic/line after recording. Commonly, you can find a 2-input interface and use a sub-mixer, taking the left/right outputs and plugging those into each channel of the interface. Some mixers will output a stereo pair to a computer as an interface themselves, such as the Allen & Heath ZED16. If you want full control over every single input, you need to multi-track.
Each mic or line that you record with gets its own track in your DAW software, which you can edit and process after the fact. This gives you a lot of control over a recording and opens up many mixing options, but also many more issues. Interfaces that facilitate multitracking include the PreSonus FireStudio, Focusrite Scarlett interfaces, etc. There are some mixers that are also interfaces, such as the PreSonus StudioLive 16, but these are very expensive. There are core-card interfaces as well; these plug directly into your motherboard via PCI or PCI Express slots. Pro Tools HD is a core-card interface and requires more hardware than just the card to work. I would recommend steering clear of these until you have a firm grasp of signal chain and digital audio, as there are more affordable solutions that will yield similar results in a home environment.

DAW - Digital Audio Workstation

I've talked a lot about theory, hardware, signal chain, etc., but we need a way to interpret this data. First off, what does a DAW do? Some refer to them as DAEs (Digital Audio Editors). You could call a DAW a virtual mixing board, but that isn't entirely correct. DAWs allow you to record, control, mix, and manipulate independent audio signals. You can change their volume, add effects, splice and dice tracks, combine recorded audio with MIDI-generated audio, record MIDI tracks, and much, much more. In the old days, when studios were based around large consoles, the actual audio needed to be recorded onto some kind of medium: analog tape. The audio signals passed through the boards and were printed onto the tape, the tape decks were used to play back the audio, and any cutting, overdubbing, etc. had to be done physically on the tape. With a DAW, your audio is converted into 1s and 0s through the converters on your interface when you record, and so computers and their hard drives have largely taken the place of reel-to-reel machines and analog tape.
Here is a list of commonly used DAWs in alphabetical order:
- ACID Pro
- Apple Logic
- Cakewalk SONAR
- Digital Performer
- FL Studio (Fruity Loops; only versions 8 and higher can actually record audio, I believe)
- GarageBand
- PreSonus Studio One
- Pro Tools
- REAPER
- Propellerhead Reason (version 6 combined Reason and Record into one piece of software, so it is now a full audio DAW; earlier versions of Reason are MIDI-based and don't record audio)
- Propellerhead Record (see above)
- Steinberg Cubase
- Steinberg Nuendo
There are of course many more, but these are the main contenders. Note that not all DAWs have audio recording capabilities (all the ones I listed do, because this thread is about audio recording); many are designed for applications like MIDI composing, looping, etc. Some are relatively new, others have been around for a while and have undergone many updates and transformations. Most have different versions that cater to different types of recording communities, such as home recording/consumer or professional.
That's a whole lot of choices. You have to do a lot of research to understand what each one offers, what limitations they may have etc... Logic, Garageband and Digital Performer for instance are Mac-only. ACID Pro, FL Studio and SONAR will only run on Windows machines. Garageband is free and is even pre-installed on every Mac computer. Most other DAWs cost something.
Reaper is a standout. A non-commercial license only costs $60. Other DAWs often come bundled with interfaces, such as ProTools MP with M-Audio interfaces, Steinberg Cubase LE with Lexicon Interfaces, Studio One with Presonus Interfaces etc. Reaper is a full function, professional, affordable DAW with a tremendous community behind it. It's my recommendation for everyone, and comes with a free trial. It is universally compatible and not hardware-bound.
You of course don't have to purchase a bundle. Your research might show that a particular interface suits your needs well, but that the software the same company offers or even bundles isn't that hot. As a consumer you have a plethora of software and hardware manufacturers competing for your business, and there is no shortage of choice. One thing to think about, though, is compatibility and customer support. With some exceptions, you can technically run most DAWs with most interfaces. But again, don't just assume this; do your research! Also, some DAWs will run smoother on certain interfaces, and might experience problems on others. It's not a bad assumption that if you purchase the software and hardware from the same company, they're at least somewhat optimized for each other. In fact, Pro Tools until recently would only run on Digidesign (now Avid) and M-Audio interfaces. While many folks didn't like being limited in their hardware choices to run Pro Tools, a lot of users didn't mind, because, I think, at least in part it made Pro Tools run smoother for everyone, and if you did have a problem, you only had to call up one company. There are many documented cases where consumers with software and hardware from different companies get the runaround:
Software Company X: "It's a hardware issue, call Hardware Company Z". Hardware Company Z: "It's a software issue, call Software Company X".
Another thing to research is the different versions of each piece of software. Many DAWs come in several versions at different price points, from entry-level or student editions all the way up to versions catering to the pros. Cheaper versions come with limitations, whether it's a cap on the number of audio tracks you can run simultaneously, the plug-ins available or supported plug-in formats, or the lack of other features the upper versions have. Some pro versions might require you to run certain kinds of hardware. I don't have the time or the will to research individual DAWs, so if any of you want to make a comparison of different versions of a specific DAW, be my guest! In the end, like I keep stressing, we each have to do our own research.
One big thing to note about the DAW is this: your signal chain is your DAW. It is the digital representation of that chain, and you have to understand it in order to use the DAW properly. It is how you route the signal from one spot to another, how you move it through a sidechain compressor or bus the drums into the main fader. It is a digital representation of a large-format recording console, and if you don't understand how the signal gets from the sound source to your monitor (speaker), you're going to have a bad time.

Playback - Monitors are not just for looking at!

I've mentioned monitors several times and want to touch on them quickly: monitors are whatever you use to listen to the sound. These can be headphones, powered speakers, unpowered speakers, etc. The key thing is that they are accurate. You want good depth of field, you want as wide a frequency response as you can get, and you want NEARFIELD monitors. Unless you are working in a space that can put the monitor 8' away from you, a 6" driver is really the biggest speaker you need; at nearfield distances, that will reproduce the audio frequency range faithfully for you. There are many options here: closed-back headphones, open-back headphones, and studio monitors, both powered and unpowered (the latter require a separate power amp to drive them). For headphones, I recommend the AKG K271, AKG K872, Sennheiser HD280 Pro, etc. There are many options, but if mixing on headphones, I recommend spending some good money on a set. For powered monitors, there's really only one choice I recommend: the Kali Audio LP-6. They are, dollar for dollar, the best monitors you can buy for a home studio, period. These things contend with Genelecs and cost a quarter of the price. Yes, they still cost a bit, but if you're going to invest, invest wisely. I don't recommend unpowered monitors: if you skimp on the power amp, you lose all the advantages you gained with the monitors. If you're not going with headphones, just get powered monitors.

Drum Mic'ing Guide, I'm not going to re-create the wheel.


That's all for now; this has taken some time to put together (a couple hours now). I can answer other questions as they pop up. I used a few sources for the information, most notably some well-put-together sections in the recording section of the Pearl Drummers Forum. I know a couple of the users are no longer active there, but if you see this and think "Hey, he ripped me off!", you're right, and thanks for allowing me to rip you off!

A couple other tips that I've come across for home recording:
You need to manage your gain/levels when recording. Digital is NOT analog! What does this mean? You should be PEAKING (the loudest the signal gets) around -12dB to -15dB on your meters. Any hotter than that and you eat up your headroom; once you hit 0dBFS the signal clips, and digital clipping sounds terrible.
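To make that -12dB to -15dB target concrete, here's a minimal Python sketch (standard library only; the `peak_dbfs` helper is my own illustration, not part of any DAW) that measures the peak level of a signal in dB relative to full scale:

```python
import math

def peak_dbfs(samples):
    # Peak level of a float signal (full scale = 1.0) in dBFS.
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak) if peak > 0 else float("-inf")

# A 440 Hz sine peaking at 20% of full scale sits near -14 dBFS,
# right in the suggested -12 to -15 dB recording range:
tone = [0.2 * math.sin(2 * math.pi * 440 * n / 48000) for n in range(48000)]
print(round(peak_dbfs(tone), 1))  # about -14.0
```

The takeaway: halving your recording level only costs about 6dB of meter reading, but buys you real safety margin against clipping.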
What sound level should my master bus be at for Youtube?
Bass Traps 101
Sound Proofing 101
submitted by M3lllvar to drums

Differences between LISP 1.5 and Common Lisp, Part 2a

Here is the first part of the second part (I ran out of characters again...) of a series of posts documenting the many differences between LISP 1.5 and Common Lisp. The preceding post can be found here.
In this part we're going to look at LISP 1.5's library of functions.
Of the 146 symbols described in The LISP 1.5 Programmer's Manual, sixty-two have the same names as standard symbols in Common Lisp. These symbols are enumerated here.
The symbols t and nil have been discussed already. The remaining symbols are operators. We can divide them into groups based on how semantics (and syntax) differ between LISP 1.5 and Common Lisp:
  1. Operators that have the same name but have quite different meanings
  2. Operators that have been extended in Common Lisp (e.g. to accept a variable number of arguments), but that otherwise have similar enough meanings
  3. Operators that have remained effectively the same
The third group is the smallest. Some functions differ only in that they have a larger domain in Common Lisp than in LISP 1.5; for example, the length function works on sequences instead of lists only. Such functions are pointed out below. All the items in this list should, given the same input, behave identically in Common Lisp and LISP 1.5. They all also have the same arity.
These are somewhat exceptional items on this list. In LISP 1.5, car and cdr could be used on any object; for atoms, the result was undefined, but there was a result. In Common Lisp, applying car and cdr to anything that is not a cons is an error. Common Lisp does specify that taking the car or cdr of nil results in nil, which was not a feature of LISP 1.5 (it comes from Interlisp).
Common Lisp's equal technically compares more things than the LISP 1.5 function, but of course Common Lisp has many more kinds of things to compare. For lists, symbols, and numbers, Common Lisp's equal is effectively the same as LISP 1.5's equal.
In Common Lisp, expt can return a complex number. LISP 1.5 does not support complex numbers (as a first class type).
As mentioned above, Common Lisp extends length to work on sequences. LISP 1.5's length works only on lists.
It's kind of a technicality that this one makes the list. In terms of functionality, you probably won't have to modify uses of return---in the situations in which it was used in LISP 1.5, it worked the same as it would in Common Lisp. But Common Lisp's definition of return is really hiding a huge difference between the two languages discussed under prog below.
As with length, this function operates on sequences and not only lists.
In Common Lisp, this function is deprecated.
LISP 1.5 defined setq in terms of set, whereas Common Lisp makes setq the primitive operator.
Of the remaining thirty-three, seven are operators that behave differently from the operators of the same name in Common Lisp:
  • apply, eval
The connection between apply and eval has been discussed already. Besides setq and prog (and special or common, discussed below), function parameters were the only way to bind variables in LISP 1.5 (the idea of a value cell was introduced by Maclisp); the manual describes apply as "The part of the interpreter that binds variables" (p. 17).
  • compile
In Common Lisp the compile function takes one or two arguments and returns three values. In LISP 1.5 compile takes only a single argument, a list of function names to compile, and returns that argument. The LISP 1.5 compiler would automatically print a listing of the generated assembly code, in the format understood by the Lisp Assembly Program or LAP. Another difference is that compile in LISP 1.5 would immediately install the compiled definitions in memory (and store a pointer to the routine under the subr or fsubr indicators of the compiled functions).
  • count, uncount
These have nothing to do with Common Lisp's count. Instead of counting the number of items in a collection satisfying a certain property, count is an interface to the "cons counter". Here's what the manual says about it (p. 34):
The cons counter is a useful device for breaking out of program loops. It automatically causes a trap when a certain number of conses have been performed.
The counter is turned on by executing count[n], where n is an integer. If n conses are performed before the counter is turned off, a trap will occur and an error diagnostic will be given. The counter is turned off by uncount[NIL]. The counter is turned on and reset each time count[n] is executed. The counter can be turned on so as to continue counting from the state it was in when last turned off by executing count[NIL].
This counting mechanism has no real counterpart in Common Lisp.
  • error
In Common Lisp, error is part of the condition system, and accepts a variable number of arguments. In LISP 1.5, it has a single, optional argument, and of course LISP 1.5 had no condition system. It had errorset, which we'll discuss later. In LISP 1.5, executing error would cause an error diagnostic and print its argument if given. While this is fairly similar to Common Lisp's error, I'm putting it in this section since the error handling capabilities of LISP 1.5 are very limited compared to those of Common Lisp (consider that this was one of the only ways to signal an error). Uses of error in LISP 1.5 won't necessarily run in Common Lisp, since LISP 1.5's error accepted any object as an argument, while Common Lisp's error needs designators for a simple-error condition. An easy conversion is to change (error x) into (error "~A" x).
  • map
This function is quite different from Common Lisp's map. The incompatibility is mentioned in Common Lisp: The Language:
In MacLisp, Lisp Machine Lisp, Interlisp, and indeed even Lisp 1.5, the function map has always meant a non-value-returning version. However, standard computer science literature, including in particular the recent wave of papers on "functional programming," have come to use map to mean what in the past Lisp implementations have called mapcar. To simplify things henceforth, Common Lisp follows current usage, and what was formerly called map is named mapl in Common Lisp.
But even mapl isn't the same as map in LISP 1.5, since mapl returns the list it was given and LISP 1.5's map returns nil. Actually there is another, even larger incompatibility that isn't mentioned: the order of the arguments is different. The first argument of LISP 1.5's map was the list to be mapped and the second argument was the function to map over it. (The order was changed in Maclisp, likely because of the extension of the mapping functions to multiple lists.) You can't just change all uses of map to mapl because of this difference. You could define a function such as map-1.5:
(defun map-1.5 (list function) (mapl function list) nil) 
and replace map with map-1.5 (or just shadow the name map).
  • function
This operator has been discussed earlier in this post.
Common Lisp doesn't need anything like LISP 1.5's function. However, mostly by coincidence, it will tolerate it in many cases; in particular, it works with lambda expressions and with references to global function definitions.
  • search
This function isn't really anything like Common Lisp's search. Here is how it is defined in the manual (p. 63, converted from m-expressions into Common Lisp syntax):
(defun search (x p f u) (cond ((null x) (funcall u x)) ((funcall p x) (funcall f x)) (t (search (cdr x) p f u)))) 
Somewhat confusingly, the manual says that it searches "for an element that has the property p"; one might expect the second branch to test (get x p).
The function is kind of reminiscent of the testr function, used to exemplify LISP 1.5's indefinite scoping in the previous part.
  • special, unspecial
LISP 1.5's special variables are pretty similar to Common Lisp's special variables—but only because all of LISP 1.5's variables are pretty similar to Common Lisp's special variables. The difference between regular LISP 1.5 variables and special variables is that symbols declared special (using this special special special operator) have a value on their property list under the indicator special, which is used by the compiler when no binding exists in the current environment. The interpreter knew nothing of special variables; thus they could be used only in compiled functions. Well, they could be used in any function, but the interpreter wouldn't find the special value. (It appears that this is where the tradition of Lisp dialects having different semantics when compiled versus when interpreted began; eventually Common Lisp would put an end to the confusion.)
You can generally change special into defvar and get away fine. However there isn't a counterpart to unspecial. See also common.
Now come the operators that are essentially the same in LISP 1.5 and in Common Lisp, but have some minor differences.
  • append
The LISP 1.5 function takes only two arguments, while Common Lisp allows any number.
  • cond
In Common Lisp, when no test in a cond form is true, the result of the whole form is nil. In LISP 1.5, an error was signaled, unless the cond was contained within a prog, in which case it would quietly do nothing. Note that the cond must be at the "top level" inside the prog; cond forms at any deeper level will error if no condition holds.
  • gensym
The LISP 1.5 gensym function takes no arguments, while the Common Lisp function does.
  • get
Common Lisp's get takes three arguments, the last of which is a value to return if the symbol does not have the indicator on its property list; in LISP 1.5 get has no such third argument.
  • go
In LISP 1.5 go was allowed in only two contexts: (1) at the top level of a prog; (2) within a cond form at the top level of a prog. Later dialects would loosen this restriction, leading to much more complicated control structures. While progs in LISP 1.5 were somewhat limited, it is at least fairly easy to tell what's going on (e.g. loop conditions). Note that return does not appear to be limited in this way.
  • intern
In Common Lisp, intern can take a second argument specifying in what package the symbol is to be interned, but LISP 1.5 does not have packages. Additionally, the required argument to intern is a string in Common Lisp; LISP 1.5 doesn't really have strings, and so intern instead wants a pointer to a list of full words (of packed BCD characters; the print names of symbols were stored in this way).
  • list
In Common Lisp, list can take any number of arguments, including zero, but in LISP 1.5 it seems that it must be given at least one argument.
  • load
In LISP 1.5, load can't be given a filespec as an argument, for a simple reason: it can't be given anything as an argument. Its purpose is simply to hand control over to the loader. The loader "expects octal correction cards, 704 row binary cards, and a transfer card." If you have the source code that would be compiled into the material to be loaded, then you can just put it in another file and use Common Lisp's load to load it in. But if you don't have the source code, then you're out of luck.
  • mapcon, maplist
The differences between Common Lisp and LISP 1.5 regarding these functions are similar to those for map given above. Both of these functions returned nil in LISP 1.5, and they took the list to be mapped as their first argument and the function to map as their second argument. A major incompatibility to note is that maplist in LISP 1.5 did what mapcar in Common Lisp does; Common Lisp's maplist is different.
  • member
In LISP 1.5, member takes none of the fancy keyword arguments that Common Lisp's member does, and returns only a truth value, not the tail of the list.
  • nconc
In LISP 1.5, this function took only two arguments; in Common Lisp, it takes any number.
  • prin1, print, terpri
In Common Lisp, these functions take an optional argument specifying an output stream to which they will send their output, but in LISP 1.5 prin1 and print take just one argument, and terpri takes no arguments.
  • prog
In LISP 1.5, the list of program variables was just that: a list of variables. No initial values could be provided as they can in Common Lisp; all the program variables started out bound to nil. Note that the program variables are just like any other variables in LISP 1.5 and have indefinite scope.
In the late '70s and early '80s, the maintainers of Maclisp and Lisp Machine Lisp wanted to add "naming" abilities to prog. You could say something like
(prog outer () ... (prog () (return ... outer))) 
and the return would jump not just out of the inner prog, but also out of the outer one. However, they ran into a problem with integrating a named prog with parts of the language that were based on prog. For example, they could add a special case to dotimes to handle an atomic first argument, since regular dotimes forms had a list as their first argument. But Maclisp's do had two forms: the older (introduced in 1969) form
(do atom initial step-form end-test body...) 
and the newer form, which was identical to Common Lisp's do. The older form was equivalent to
(do ((atom initial step-form)) (end-test) body...) 
Since the older form was still supported, they couldn't add a special case for an atomic first argument because that was the normal case of the older kind of do. They ended up not adding named prog, owing to these kinds of difficulties.
However, during the discussion of how to make named prog work, Kent Pitman sent a message that contained the following text:
I now present my feelings on this issue of how DO/PROG could be done in order this haggling, part of which I think comes out of the fact that these return tags are tied up in PROG-ness and so on ... Suppose you had the following primitives in Lisp: (PROG-BODY ...) which evaluated all non-atomic stuff. Atoms were GO-tags. Returns () if you fall off the end. RETURN does not work from this form. (PROG-RETURN-POINT form name) name is not evaluated. Form is evaluated and if a RETURN-FROM specifying name (or just a RETURN) were executed, control would pass to here. Returns the value of form if form returns normally or the value returned from it if a RETURN or RETURN-FROM is executed. [Note: this is not a [*]CATCH because it is lexical in nature and optimized out by the compiler. Also, a distinction between NAMED-PROG-RETURN-POINT and UNNAMED-PROG-RETURN-POINT might be desirable – extrapolate for yourself how this would change things – I'll just present the basic idea here.] (ITERATE bindings test form1 form2 ...) like DO is now but doesn't allow return or goto. All forms are evaluated. GO does not work to get to any form in the iteration body. So then we could just say that the definitions for PROG and DO might be (ignore for now old-DO's – they could, of course, be worked in if people really wanted them but they have nothing to do with this argument) ... (PROG [  ]  . ) => (PROG-RETURN-POINT (LET  (PROG-BODY . )) [  ]) (DO [  ]   . ) => (PROG-RETURN-POINT (ITERATE   (PROG-BODY . )) [  ]) Other interesting combinations could be formed by those interested in them. If these lower-level primitives were made available to the user, he needn't feel tied to one of PROG/DO – he can assemble an operator with the functionality he really wants. 
Two years later, Pitman would join the team developing the Common Lisp language. For a little while, incorporating named prog was discussed, which eventually led to the splitting of prog in quite a similar way to Pitman's proposal. Now prog is a macro, simply combining the three primitive operators let, block, and tagbody. The concept of the tagbody primitive in its current form appears to have been introduced in this message, which is a writeup by David Moon of an idea due to Alan Bawden. In the message he says
The name could be GO-BODY, meaning a body with GOs and tags in it, or PROG-BODY, meaning just the inside part of a PROG, or WITH-GO, meaning something inside of which GO may be used. I don't care; suggestions anyone?
Guy Steele, in his proposed evaluator for Common Lisp, called the primitive tagbody, which stuck. It is a little bit more logical than go-body, since go is just an operator and allowed anywhere in Common Lisp; the only special thing about tagbody is that atoms in its body are treated as tags.
  • prog2
In LISP 1.5, prog2 was really just a function that took two arguments and returned the result of the evaluation of the second one. The purpose of it was to avoid having to write (prog () ...) everywhere when all you want to do is call two functions. In later dialects, progn would be introduced and the "implicit progn" feature would remove the need for prog2 used in this way. But prog2 stuck around and was generalized to a special operator that evaluated any number of forms, while holding on to the result of the second one. Programmers developed the (prog2 nil ...) idiom to save the result of the first of several forms; later prog1 was introduced, making the idiom obsolete. Nowadays, prog1 and prog2 are used typically for rather special purposes.
Regardless, in LISP 1.5 prog2 was a machine-coded subroutine that was equivalent to the following function definition in Common Lisp:
(defun prog2 (one two) two) 
  • read
The read function in LISP 1.5 did not take any arguments; Common Lisp's read takes four. In LISP 1.5, read read either from "SYSPIT" or from the punched card reader. It seems that SYSPIT stood for "SYStem Paper (maybe Punched) Input Tape", and that it designated a punched tape reader; alternatively, it might designate a magnetic tape reader, but the manual makes reference to punched cards. But more on input and output later.
  • remprop
The only difference between LISP 1.5's remprop and Common Lisp's remprop is that the value of LISP 1.5's remprop is always nil.
  • setq
In Common Lisp, setq takes an arbitrary even number of arguments, representing pairs of symbols and values to assign to the variables named by the symbols. In LISP 1.5, setq takes only two arguments.
  • sublis
LISP 1.5's sublis and subst do not take the keyword arguments that Common Lisp's sublis and subst take.
  • trace, untrace
In Common Lisp, trace and untrace are operators that take any number of arguments and trace the functions named by them. In LISP 1.5, both trace and untrace take a single argument, which is a list of the functions to trace.

Functions not in Common Lisp

We turn now to the symbols described in the LISP 1.5 Programmer's Manual that don't appear in Common Lisp. Let's get the easiest case out of the way first: Here are all the operators in LISP 1.5 that have a corresponding operator in Common Lisp, with notes about differences in functionality where appropriate.
  • add1, sub1
These functions are the same as Common Lisp's 1+ and 1- in every way, down to the type genericism.
  • conc
This is just Common Lisp's append, or LISP 1.5's append extended to more than two arguments.
  • copy
Common Lisp's copy-list function does the same thing.
  • difference
This corresponds to -, although difference takes only two arguments.
  • divide
This function takes two arguments and is basically a consing version of Common Lisp's floor:
(divide x y) = (multiple-value-list (floor x y)) 
  • digit
This function takes a single argument, and is like Common Lisp's digit-char-p except that the radix isn't variable, and it returns a true or false value only (and not the weight of the digit).
  • efface
This function deletes the first appearance of an item from a list. A call like (efface item list) is equivalent to the Common Lisp code (delete item list :count 1).
  • greaterp, lessp
These correspond to Common Lisp's > and <, although greaterp and lessp take only two arguments.
As a historical note, the names greaterp and lessp survived in Maclisp and Lisp Machine Lisp. Both of those languages also had > and <, which were used for the two-argument case; Common Lisp favored genericism and went with > and < only. However, a vestige of the old predicates still remains, in the lexicographic ordering functions: char-lessp, char-greaterp, string-lessp, string-greaterp.
  • minus
This function takes a single argument and returns its negation; it is equivalent to the one-argument case of Common Lisp's -.
  • leftshift
This function is the same as ash in Common Lisp; it takes two arguments, m and n, and returns m×2^n. Thus if the second argument is negative, the shift is to the right instead of to the left.
  • liter
This function is identical in essence to Common Lisp's alpha-char-p, though more precisely it's closer to upper-case-p; LISP 1.5 was used on computers that made no provision for lowercase characters.
  • pair
This is equivalent to the normal, two-argument case of Common Lisp's pairlis.
  • plus
This function takes any number of arguments and returns their sum; its Common Lisp counterpart is +.
  • quotient
This function is equivalent to Common Lisp's /, except that quotient takes only two arguments.
  • recip
This function is equivalent to the one-argument case of Common Lisp's /.
  • remainder
This function is equivalent to Common Lisp's rem.
  • times
This function takes any number of arguments and returns their product; its Common Lisp counterpart is *.
Part 2b will be posted in a few hours probably.
submitted by kushcomabemybedtime to lisp

A complete guide of and debunking of audio on Linux, ALSA and Pulse

Hey fellow penguins,
A few days ago, a user asked about audio quality on Linux, and whether it is worse or better than audio on Windows. The thread quickly became a mess, full of misconceptions and urban myths about Linux. I figured it would be worthwhile to create a complete guide to Linux audio, as well as dispel some myths and misconceptions.
So that we're all on the same page, this is going to be thorough, slowly introducing more concepts.

What is sound? How and what can I hear?

You might remember from high school that sound is waves traveling through the air. Vibrations of any kind cause molecules in the air to move. When that waveform reaches your ears, it causes little hairs in your ear to move. Different hairs are susceptible to different frequencies, and the signals sent by these hairs are turned into the sound you hear by your brain.
In reality it is a little more complicated, but for the sake of this post, that's all you need to know.
The pitch of a sound comes from its frequency: the 'shorter' the waves in a waveform, the higher the sound. The volume of a sound comes from how 'tall' the waves are. Human hearing sits in a range between 20Hz and 20,000Hz, though it varies per person. Being the egocentric species we are, we call waves below 20Hz 'infrasound' and waves above 20kHz 'ultrasound.' Almost no humans can hear into the ultrasound range; you will find that your hearing probably cuts off around 16kHz.
To play around with this, check out this tone generator; you can prove all of the above to yourself. As a fun fact: human hearing is actually really bad, we have among the most limited frequency ranges. A cat can hear up to 40kHz, and dolphins can even hear up to 160kHz!
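If you'd rather generate test tones locally than use a web page, here's a small sketch using Python's standard `wave` module; the `write_tone` helper and the filename are made up for illustration:

```python
import math
import struct
import wave

def write_tone(path, freq_hz, seconds=2.0, rate=48000, amplitude=0.5):
    # Write a mono 16-bit PCM sine tone you can play back to test your hearing.
    n = int(seconds * rate)
    frames = b"".join(
        struct.pack("<h", int(amplitude * 32767 *
                              math.sin(2 * math.pi * freq_hz * i / rate)))
        for i in range(n)
    )
    with wave.open(path, "wb") as w:
        w.setnchannels(1)      # mono
        w.setsampwidth(2)      # 16-bit samples
        w.setframerate(rate)   # 48 kHz
        w.writeframes(frames)

write_tone("tone_16khz.wav", 16000)  # many adults already can't hear this one
```

Start the playback quiet and raise the frequency gradually; wherever the tone seems to vanish is roughly where your personal hearing cuts off.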
FACT: Playing loud music is dangerous! If you listen to music and feel discomfort, you should turn the volume down. A true alarm is when you hear a beep: this is called tinnitus, and that beep is pretty much the death cry of the cells that could hear that frequency. It may be the last time you hear that very specific frequency ever again. Please, listening to loud music is not worth the permanent hearing damage, dial it down for your own sake! <3

How does my computer generate sound?

To listen to sound, you will probably be using headphones or speakers, inside of them are cones that are driven by an electromagnet, causing them to vibrate at very precise frequencies. This is essentially how sound works, though modern headphones certainly can be pretty complex.
To drive that magnet, an audio source will send an analog signal (a waveform) over a wire to the driver, causing it to move at the frequency of that waveform. This is in essence how audio playback works; and we're not going to get into it much deeper than this.
Computers are digital, which is to say they don't do analog; processors understand ON and OFF, they do not understand 38.689138% OFF and 78.21156% ON. When converting an analog signal (like sound) to a digital one, we make use of a format called PCM. For PCM to be turned back into an analog signal, you need a DAC, or as you probably know it: a sound card. DAC stands for 'Digital-to-Analog Converter', though some people mistakenly expand it as 'Digital Audio Converter'.
PCM stands for Pulse-code Modulation, which is a way to represent sampled analog signals in a digital format. We're not going to get into it too much here, but imagine taking a sample of a waveform at regular intervals and storing the value, and then rounding that value to a nearest 'step' (remember this). That's PCM.
The fidelity of PCM comes from two elements, which we are going to discuss next: sampling rate and bit depth.
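The "sample at regular intervals, then round to the nearest step" idea can be sketched in a few lines of Python (the `sample_pcm` helper is purely illustrative, not any real codec):

```python
import math

def sample_pcm(freq_hz, rate, bits, n_samples):
    # Sample a sine wave at `rate` Hz and round each sample to the nearest
    # step a signed integer of `bits` bits can represent: that's PCM.
    max_code = 2 ** (bits - 1) - 1   # e.g. 32767 for 16-bit signed
    out = []
    for i in range(n_samples):
        analog = math.sin(2 * math.pi * freq_hz * i / rate)  # the "real" signal
        out.append(round(analog * max_code))                 # quantization step
    return out

codes = sample_pcm(440, 48000, 16, 5)
print(codes)  # the first few quantized sample values of a 440 Hz tone
```

Everything a 16-bit PCM stream can say about the waveform is in those integer codes; the rounding error between `analog * max_code` and the stored code is the quantization noise.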

What is sampling rate? Or: HOW SOUND GOOD?

Sampling rate is the most important part of making PCM sound good. Remember how humans hear in a range of 20Hz to 20kHz? The sample rate of audio has a lot to do with this. You cannot capture high frequencies if you do not capture samples often enough. Since our ears can hear up to 20kHz, you might imagine that 20kHz would be the ideal rate for capturing audio; however, a result of sampling is that you actually need twice the highest frequency. This is the Nyquist-Shannon sampling theorem, which is a complicated thing. Just understand that to reproduce a 20kHz frequency, you need a sample rate of 40kHz.
To have a little bit of room and leeway, we settled on a sample rate of 48kHz (conveniently a multiple of the 8kHz telephony rate) for playback, and 96kHz for recording. We record at the higher rate only to make sure absolutely no data is lost. You might be more familiar with 44.1kHz, the standard settled on for CD playback and NTSC. A lot of scientific research has been done on sound quality, and there is no evidence that people can tell the difference between 48kHz and anything higher.
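You can watch the Nyquist limit bite numerically. In this sketch (nothing beyond Python's `math` module), a tone above half the sample rate produces literally the same samples as a lower-frequency tone, which is the aliasing the theorem warns about:

```python
import math

RATE = 48000  # samples per second

def sampled(freq_hz, n):
    # The first n samples a 48 kHz ADC would capture of a pure sine tone.
    return [math.sin(2 * math.pi * freq_hz * i / RATE) for i in range(n)]

# A 30 kHz tone is above the 24 kHz Nyquist limit of a 48 kHz stream.
# Its samples match an 18 kHz tone (48 - 30 = 18) up to a sign flip,
# so after sampling the two are indistinguishable: that's aliasing.
above = sampled(30000, 8)
alias = sampled(18000, 8)
print(all(abs(a + b) < 1e-9 for a, b in zip(above, alias)))  # True
```

This is why recording gear filters out everything above half the sample rate before the ADC: once aliasing happens, no amount of processing can undo it.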
MYTH BUST: Humans cannot hear beyond 20 kHz, period. Anyone who claims to be able to is either supernatural or lying to you - I'll let you choose which.

What is bit-depth? Or: HOW IT MAKE SOUND REALLY NICE?

Remember how I told you to remember that PCM rounds values to the nearest step? This has to do with how binary works. The more bits, the bigger the number you can store. In PCM, the bit depth decides the number of bits of information in each sample. With 16-bit, the range of values that can be stored is 0 to 65535. Going beyond this is pointless for humans, with no scientific research showing any proven benefit, though marketers would like you to believe there are benefits.
MYTH BUST: 24-bit depth is often touted as 'high-resolution audio', claiming benefits of a better sonic experience. Such is nothing more than marketing speech, there is no meaningful data 24-bit can capture that 16-bit cannot.
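What bit depth actually buys you is dynamic range, roughly 6dB per bit. A quick back-of-the-envelope check in Python (the helper name is my own):

```python
import math

def dynamic_range_db(bits):
    # Ratio between the largest representable value and one quantization
    # step, expressed in decibels: 20 * log10(2^bits) ~= 6.02 dB per bit.
    return 20 * math.log10(2 ** bits)

for bits in (16, 24):
    print(bits, round(dynamic_range_db(bits), 1))
# 16-bit already gives ~96.3 dB, spanning roughly the gap between a quiet
# room and the threshold of pain; 24-bit's ~144.5 dB adds nothing audible
# on playback, though the extra headroom is handy while recording/mixing.
```

So the honest version of the marketing claim is: 24-bit is useful as working headroom in the studio, not as a playback format for your ears.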

Channels? Or: HOW IT CAN MAKE SOUND IN LEFT BUT NOT RIGHT?

We'll briefly touch on the last part of PCM audio: channels. This is very self-explanatory, humans have two ears and can hear separate sounds in both of them, which means we have stereo hearing. As a result, most music is recorded with 2 channels. For some surround setups, you need more channels; this is why you may have heard of 5.1 or 7.1. The first digit is the number of main channels the PCM carries, and the .1 is the low-frequency (subwoofer) channel.
For most desktop usage, the only sound we care about is 2-channel PCM.

Recap

So, we've covered all the elements of PCM sound. Let's go over it quickly: sample rate is expressed in Hz and is how often a sample of a waveform is captured, representing the x-axis of a waveform. Bit-depth is the bits of information stored in each sample, and represents the y-axis of the waveform. Channels decide how many simultaneous outputs the PCM can drive separately, since we have 2 ears, you need at least two channels.
As a result, the standard audio playback for both consumers and professionals is 48kHz, 16-bit, 2 channel PCM. This is more than enough to fully represent the full range of human hearing.

How it works in Linux

So, now that we know how PCM works, how does Linux make sound? How can you make Linux sound great? A few important components come into play here, and we'll need to discuss each of them in some detail.

ALSA

ALSA is the interface to the kernel's sound drivers. ALSA can take a PCM signal and send it to your hardware by talking to the driver. An important detail about most DACs is that they can only accept one signal at a time, which means only a single application can send sound to ALSA at once. Long ago, in a darker time, you couldn't watch a movie while listening to music!
This problem was solved a long time ago with software mixing in alsa-lib (the dmix plugin), but doing mixing at the library level isn't a very good solution to the problem. This gave rise to sound servers, of which many have existed. Before PulseAudio, esound was a very popular one, but it had many problems and was eventually succeeded by PulseAudio.

PulseAudio

When you think audio on Linux, PulseAudio is probably among the first things you think of. PulseAudio is NOT a driver, nor does it talk to your drivers. In fact, PulseAudio does only two things of note, which we'll discuss in detail later. PulseAudio talks to ALSA, taking control of its single audio stream, and allows other applications to talk to PulseAudio instead. Pulse is an 'audio multiplexer', turning multiple signals into one through a process called mixing. Mixing is an incredibly complicated subject that we won't cover in depth here.
To mix sounds, all the PCM sources must first be in the same format (the one being sent to ALSA); if a PCM stream sent to Pulse does not match the format being sent to ALSA, Pulse performs an extra step before mixing called resampling. Resampling is another very complicated subject; it can turn an 8kHz, 4-bit, 1-channel PCM stream into a 24kHz, 24-bit, 2-channel PCM stream.
These two things allow you to play a game, listen to music, watch YouTube, and have notifications produce sound all at the same time. PulseAudio is the most critical element of the Linux sound stack.
FACT: PulseAudio is a contentious subject; many people dislike this particular bit of software. In all honesty, PulseAudio was brought to the general public in a somewhat premature state, breaking audio for many people. These days, though, PulseAudio is a very stable, solid piece of software. If you have audio issues now, it's usually a problem in ALSA or your driver.

What about JACK and PipeWire?

PulseAudio isn't the only sound server/daemon available for Linux, though it is certainly the most popular and most likely the default of whatever distribution you are using. PulseAudio has become a bit of a standard for Linux sound and has by far the best compatibility with most applications, but that doesn't mean there aren't alternatives.
JACK (JACK Audio Connection Kit, a recursive acronym like GNU) is a sound server focused primarily on low latency. If you are doing professional audio work on Linux, you will already be very familiar with JACK. JACK's development is very focused on low latency, real-time audio and is critical for such people. JACK is available on most distros as an alternative, and you can try it for yourself if you so want; but you might find some applications do not work nicely with JACK.
PipeWire is a project currently in development, looking to solve key problems in existing sound servers. PipeWire isn't just a sound server; it also handles the multiplexing of video sources (like a camera). Special attention has been paid to working with sandboxed applications (like Flatpaks), an area where PulseAudio is lacking. PipeWire is a very promising project that might very well succeed PulseAudio in the future, and you should expect to see it appearing in distribution repositories very soon. You can try it yourself right now, though it isn't quite as easy to get started with as JACK.
More audio servers exist, but are beyond the scope of this post.

What is resampling?

Resampling is the process of turning a PCM stream into another PCM stream of a different resolution. Your DAC only accepts a limited range of PCM signals, and it is up to the software to make sure the PCM stream is compatible. There is almost no DAC out there that doesn't support 44.1kHz, 16-bit, 2-channel PCM, so this tends to be the default. When you play an audio source (like an Ogg Vorbis file), the PCM stream might be 96kHz, 24-bit, 2-channel PCM.
To fix that, PulseAudio will use a resampling algorithm. There are two kinds of resampling: upsampling and downsampling. Upsampling is lossless, since you can always represent less data with more data. Downsampling is lossy by definition: you cannot represent 24-bit PCM with 16-bit PCM.
MYTH: Downsampling is a loss in quality! This is only true in a technical sense, or if you are downsampling to less than 48kHz, 16-bit PCM. When you downsample a 96kHz, 24-bit PCM stream to a 48kHz, 16-bit stream, no meaningful data is lost in the process, because the discarded data lies outside the human ear's hearing range.
FACT: Resampling is expensive. Good quality resampling algorithms actually take a non-trivial amount of processing power. PulseAudio defaults to a resampling method with a good balance between CPU time used and quality.
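To get a feel for what resampling actually does, here's the crudest possible resampler, linear interpolation, in Python. This is only a sketch of the idea; real resamplers (including the ones PulseAudio can use) apply far better filtering, which is exactly why they cost CPU time:

```python
# Sketch: the simplest possible resampler -- linear interpolation.
# Real resamplers use proper low-pass filters; this only illustrates
# what "changing the sample rate" means.
def resample_linear(samples, src_rate, dst_rate):
    if not samples:
        return []
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate          # position in the source stream
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        # Blend the two nearest source samples.
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

src = [0.0, 1.0, 0.0, -1.0]               # 4 samples at 8 kHz
up = resample_linear(src, 8_000, 16_000)  # upsampled to 16 kHz: 8 samples
print(len(up))   # 8
print(up[:4])    # [0.0, 0.5, 1.0, 0.5]
```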

What is mixing?

Mixing is the process of taking two PCM streams and combining them into one. This is extremely complicated and not something we're going to discuss at length. It is not important to understand how it works, only that it exists. Without mixing, you wouldn't be able to hear sounds from multiple sources at once. This is true not just for PulseAudio and computer sound, but for anything. In real life, you might use an A/V receiver to accept sound from your TV and music player at once; the receiver then mixes the signals and plays them through your speakers.
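Still, the core idea fits in a few lines. A deliberately naive Python sketch (my own, not how PulseAudio actually implements it): at its simplest, mixing sums the samples and clips the result to the valid range:

```python
# Sketch: naive mixing of two 16-bit PCM streams -- sum the samples and
# clamp to the valid range. Real mixers also manage headroom, dithering,
# and per-stream volume; this only shows the core idea.
INT16_MIN, INT16_MAX = -32768, 32767

def mix(a, b):
    out = []
    for sa, sb in zip(a, b):
        s = sa + sb
        out.append(max(INT16_MIN, min(INT16_MAX, s)))  # clip on overflow
    return out

music = [1000, -2000, 30000]
beep  = [500,   500,  5000]
print(mix(music, beep))   # [1500, -1500, 32767] -- the last sample clipped
```

The clipping in the last sample hints at why real mixing is hard: summing loud streams naively distorts, so proper mixers manage gain rather than just clamp.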

What is encoding?

Finally we can talk a little about encoding. Encoding is the process of taking a PCM stream and writing it to a permanent format; two types exist: lossy and lossless. Lossy encoding removes data from the PCM stream to save space. Usually the discarded data is useless to you and will not make a difference in sound quality; examples of lossy encodings are MP3, AAC and Ogg Vorbis. Lossless encoding stores a PCM stream in such a way that no data is lost; examples of lossless encodings are FLAC, ALAC and WAV.
Note that lossy and lossless do not mean compressed and uncompressed. A lossless format can be compressed and usually is, as uncompressed lossless encoding would be very large; it would just be the raw PCM stream. An example of lossless uncompressed audio is WAV.
A new element encodings bring is bit rate, not to be confused with sample rate and bit depth. Bit rate is how much data is stored in every second of audio. For a lossless, uncompressed PCM stream this is easy to calculate with the formula bit rate = sample rate * bit depth * channels; for 16-bit, 48kHz, 2-channel PCM this is about 1.5 Mbit per second. To get the value in bytes, divide by 8, thus 192kB per second.
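That formula is easy to check yourself; here's the arithmetic above as a quick Python sketch (the function name is mine, just for illustration):

```python
# Sketch: bit-rate arithmetic for raw (uncompressed) PCM.
def pcm_bit_rate(sample_rate, bit_depth, channels):
    """Bits of audio data per second for a raw PCM stream."""
    return sample_rate * bit_depth * channels

bps = pcm_bit_rate(48_000, 16, 2)
print(bps)               # 1536000 -- about 1.5 Mbit per second
print(bps // 8 // 1000)  # 192 -- i.e. 192 kB of data every second
```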
The bit rate of an encoder determines how much the audio is compressed. PCM compression is super complicated, but it generally involves discarding silence, cutting off frequencies you cannot hear, and so forth. Radio-quality encoding has a bit rate of roughly 128 kbps, while CD audio works out to about 1,411 kbps (44.1kHz × 16-bit × 2 channels).
Lastly, there is the concept of VBR and CBR. VBR stands for Variable Bit Rate, while CBR stands for Constant Bit Rate. With VBR, the encoder aims for a target bit rate that you set, but it can deviate if it decides it needs more or less. CBR encodes at a constant bit rate and never deviates.
MYTH: Lossless sounds better than lossy. This is blatantly untrue; lossless audio formats were created for preservation and archival reasons. When you encode a lossy file from a lossless source at a sufficient bit rate, you will not lose any important information. What is sufficient depends on the quality of the encoder: for Ogg Vorbis, 192kbps is enough, while for MP3, 256kbps should be preferred. 320kbps is excessive and the highest quality supported by MP3. In general, 256kbps does the trick, but with storage being abundant these days, you can play it safe and use 320kbps if it makes you feel better.
MYTH: CBR is better than VBR. There is practically no reason not to use VBR; there is no point in writing 256kbps of data when there is only silence or a constant tone. Let your encoder do what it does best!
FACT: Encoding a lossy format to another lossy format results in further loss of data! You will be re-compressing data that is already compressed, which is really bad. When encoding to a lossy format, always use a high quality recording in a lossless format as the source!
I DON'T BELIEVE YOU: This article from the folks at Xiph.Org (the people who brought you FLAC and Ogg Vorbis) explains it better than I can: https://people.xiph.org/%7Exiphmont/demo/neil-young.html

TL;DR, I JUST WANT THE BEST SOUND QUALITY

Here is a quick guide to achieving great sound quality on Linux with the above in mind.
As you can see, there's little you can do in Linux in the first place, so what can you do if you want better sound?
MYTH: Linux sound quality is worse than Windows. They are exactly the same; Pulse doesn't work that differently from how Windows does mixing and resampling.
MYTH: Linux sound quality can be better than Windows. Again, they are exactly the same. Any improvement in quality comes from the driver and your DAC, not the sound server; Pulse and ALSA do not touch the PCM beyond moving it around and resampling it.
I hope this (long) guide was of help to you, and helped to dispel some myths. Did I miss anything? Ask or let me know, and I'll answer as best I can. Did I make any factual errors? Please correct me with a source and I'll amend the post immediately.
submitted by _Spell_ to linux

Heritage (4)

First Chapter
Previous Chapter
The view of Sanctuary was made even more impressive as An’Ra and his team waited in the V-Lift. Through the window, they could see the ornate streets curving through resplendent pools underneath, dotted by the occasional fountain.
“I hate this.” Vora groaned, dressed in a soldier’s standard battle uniform. “Why are we here, Commander?”
“We were investigating genocide and possible use of bioweapons,” Sonak explained, “Even without the first part, Strain Y is going to scare a lot of people. I think it’s reasonable for the Council to take a personal interest in this. Besides, I think the real issue here is the fact you might actually have to speak to the Council.”
“But...ugh, fine. Yes, I wasn’t mentally prepared for it when An’Ra came along and went, Party’s over, ass to the Council, now.”
“Hey now.” An’Ra feigned offense, “I didn’t say it that way, did I?”
“Kind of close, Commander.” Sonak chuckled.
“But still, I think that this isn’t about keeping the galaxy safe.” Vora sighed. “I think the Council’s keeping an eye open for any opportunity to convince the galaxy they’re still in charge.”
“Or maybe they genuinely want to make sure that we’re not at risk of dying a horrible death by watching our own bodies melt.” Sonak shrugged. “Strain Y doesn’t care if you’re an officer or infantry.”
“That assumes the Council cares about what’s going on outside of these walls.” Vora glanced over, wariness in her look.
“Either way, we’re going to get our answer. Eyes open.” An’Ra said as the V-Lift doors parted, revealing the same ornate architecture within. Trees and grasses stole the eye as they walked through the hallways, various government officials from the myriad races conversing and conducting whatever business they were doing. After walking up some steps, they arrived at the large double-doors that led to the Council Chambers. Standing on each side were the guards constantly on watch for any potential attack. Both of them were Anaran, as expected. On approach, the guards opened up the doors to allow An’Ra and his team in.
When they entered, the room was even more magnificent than they had expected. A grand, curved window dominated the view, an unintrusive look into the beautiful splendor of Sanctuary. Directly in front of An’Ra and his team was a pathway that led to a semi-circular desk, standing in front of the raised platform where the Council sat; the councilors had just now noticed the arrivals and were settling themselves in.
And it was there An’Ra got a good look at the Council. Four of them, half Esti, half Huak. An’Ra secretly never liked the Esti: the way he could see menacing fangs when their flat mouths opened, or those flaps of scale that expand outward into a hood. It just unnerved him, for a reason he could never quite place. As soon as he sensed that they were ready, he walked up to the desk, wearing his officer’s dress uniform, comprised of a fine, smooth fabric shirt, adorned with a fluffy sash that went from his right shoulder down to his left side, shoulder pads accented with shining studs, and finished with awards placed on his top-left chest, awards hard earned back in the Great War.
“Commander An’Ra.” The Huak councilor on the far right side, Neual, began, thick fingers interlaced together as he rested his hands on the desk. “Thank you for agreeing to this unusual request, we are very appreciative.”
“It’s no trouble, Councilor.” An’Ra gave a slight bow. “How can I help?”
“We’ll start at the beginning.” The first Esti councilor, Zhur, stated, holding up a secure datapad to ensure the information was easily accessible. “Strain Y. Your report says that while there is confirmation it was used, it was not used in significant quantities. Can you elaborate on that for us?”
“Previous uses of Strain Y all had one thing in common,” An’Ra began, “The amount deployed saturated the atmosphere of the planets they were used on. This is because, despite its lethality, it is not actually that infectious. In order to guarantee the total elimination of a planet’s population, you need to deploy it in such large quantities that everyone will be infected within minutes of deployment. In this case, for Planet 3, there simply wasn’t enough to reach that threshold.”
“At which you go on to state that thermal weapons were used in a state of panic,” Yhiz, the second Esti councilor, added, “Can you explain your reasoning for us?”
“As established before, Strain Y was used on the planet. My working theory is that, when they discovered that they grossly underestimated the amount needed, they panicked and used thermal weapons to both try and burn out the supplies used and finish the genocide they started.”
“But if thermal weapons were indeed used, how did you confirm Strain Y was deployed?” Zhur spoke up.
“We found pieces of Strain Y’s genetic material on the planet’s surface.” An’Ra glanced over to Zhur’s direction. “And as I arrived back in the system, I received a quantum packet from the expedition, stating that they have confirmed that Strain Y was indeed used. Adding that with the obvious use of thermal weaponry, I concluded that the attackers didn’t use enough of the weapon to guarantee extinction.”
Zhur leaned back in her seat, scarlet eyes fixated on the desk. An’Ra couldn’t tell if she was trying to find a counter argument or just processing the information.
“Have you found any evidence that can tell us if there’s more of the strain out in the galaxy?” Neual asked after giving a sigh through his wide nostrils.
“I’m afraid not, sir. All I can definitively say is that this planet fell victim to a biological Cruel Weapon.”
“I’m more concerned about the native life.” Ghala, the final and second Huak councilor, stated after being silent. “Are you absolutely certain that none of the planet’s indigenous life survived?”
“The scientific team said that there’s a very low chance of that.” An’Ra’s ears flattened. “And after seeing the surface myself, I must agree. I don’t think we should wait for a miracle.”
“Ah...I see.” Ghala leaned back in his chair, obviously disheartened. “Even if the planet is now incapable of supporting life, we still wish to move forward with a more symbolic gesture and statement by declaring Planet 3 of System AQ 115-4A illegal for colonization.”
“But let’s move onto what I believe is the most pressing issue: the identity of the attackers.” Neual leaned forward. “Based on your report, you and the team have found nothing that either confirms or clears any potential suspect?”
“That’s correct, Councilor.” An’Ra nodded. “We’ve found nothing, within the system and on the planet itself, that tells us anything about who did it.”
“Is there any surviving infrastructure on the planet?” Ghala asked, straightening his posture. “Even if there isn’t much, maybe the natives’ equipment has something we can use?”
“As established before, the planet was devastated terribly. There are indeed ruins of their civilization, but whether or not we can salvage anything from them is a different story.” An’Ra answered with a sigh.
“So in that case, the Qu’Rathi are still the likely aggressors then.” Zhur stated.
“I’m not convinced.” An’Ra shook his head. “Everything we have so far is just circumstantial, nothing solid.”
“Yes, nothing solid that proves they did it. But looking at it from a different perspective, nothing that proves they didn’t do it either.” Zhur countered, her eyes squinting some.
“I don’t think it’s a good idea to press forward with what I think you’re planning, Councilor.” An’Ra leaned forward on the table, ears flattening back. “If you do, and we uncover evidence that clearly proves their innocence, you will be pushing an innocent race away.”
“But if we uncover evidence that proves their guilt, then the trial will be much more expedient.” Yhiz joined in, his eyes also squinting slightly.
“With respect Council, I still think that’s the worst decision you can make.” An’Ra’s teeth began to bare as he spoke. “We can’t make any decision until we acquire more evidence.”
“Nothing we have proves that Strain Y is permanently removed as a future threat.” Zhur started, “Nothing we have proves that the Federation did not do anything. Right now, we have the threat of a Class 4 Cruel Weapon looming over everyone’s heads. People will start becoming scared, start wondering if their shadows will melt them at any time.”
“I know that Councilors!” An’Ra raised his voice. “Give me time! I’m not saying this is over yet, just let me keep looking!”
“We aren’t stopping your investigation, Commander.” Neual said, holding his hand up slightly. “We’re just informing you that you may not have the time you thought you had.”
“What does that mean?” An’Ra’s ears stuck out at an angle, mixed between stiffening and anger.
The councilors looked at each other for a few moments before Zhur stood up and took in a deep breath. “Commander, based on both the collected evidence so far, and lack of any other evidence, the Council has decided to proceed with charging the Qu’Rathi Federation on counts of Genocide, possession of a Cruel Weapon, and deployment of Cruel Weapons with intent for malicious harm. Out of respect for your efforts, Commander, we will give you eight months to continue your investigation. Beyond that, we will close your investigation to allow the courts time to process and review what has been collected.”
“Are you insane?!” An’Ra shouted. “Do you even realize what would happen if you’re wrong?!”
“We do, Commander.” Zhur nodded. “But the risk is just too high. The safety of the galaxy and justice for the inhabitants of System AQ 115-4A must be our top priority. This debrief is over.”
An’Ra stood in complete and stunned silence, watching the Council casually get up from their seats and disperse to their own private offices. It wasn’t until they had fully left the chambers that An’Ra finally found the will to move and regroup with Sonak and Vora, both of whom were equally stunned.
“Those ekas!” Vora exclaimed. “It’s bad enough to be quick at accusing someone, but how dare they claim this is for those humans!”
“And here I thought all those things the news were saying was just to get people to watch them.” Sonak muttered softly. “Commander, obviously this is bad.”
“I know, Sonak.” An’Ra crossed his arms, ears now pointing straight back and teeth fully bared. “We can’t let them do this.”
“But what can we do?” Sonak exclaimed. “What options do we have?”
“Alliance Enforcement!” Vora declared. “Commander, what if you filed a complaint to the Lord-Enforcer? Tell him what’s going on?”
“That’s a good idea actually.” Sonak nodded. “If we convince the Lord-Enforcer that the Council is being too hasty with our investigation, which shouldn’t be hard, he just might deny the Council’s request for prosecution!”
“I can’t imagine the Lord-Enforcer approving this even without our complaint.” An’Ra replied. “Still, never hurts to be prepared. Come on, let’s get to it.”

Jur’El leaned back in the puffy seat he was assigned to. The restaurant he entered had a calm and relaxed atmosphere. The lighting was dimmed, which complimented the dark but cozy ambiance of the room. The walls and floor each had a dark-themed color scheme, the seats were of a different scheme but not too different to oppose the goal set by the designer. And although the building was packed with customers, their conversations did not threaten to turn anyone deaf. It was a quiet and relaxed experience, something he needed desperately.
Even now, as hard as he tried to focus on how delicious his food was, how balanced its flavor and texture were, he was still forced to relive what happened on Planet 3. He could hear the sudden screams of his colony group. The scientists who were first awoken, who wanted to find out why their Life world was so different from the data they were given. The families and menial workers who were just talking amongst themselves and organizing the supplies when those machines stormed the ship. And what still terrified him, still sent his heart racing, was when that one machine entered the control room, blood drenching its chassis. Bits and pieces of Qu’Rathi innards on its cold mechanical manipulators. How it just stared at him, lifelessly, with a rifle aimed right at his chest. And those drills. Those ghenning drills.
He was forced out of his torment by the rough poking of his shoulder. When he looked, it was another Qu’Rathi. “Captain Jur’El, right?”
“Uh..yes, who are you?” He nodded in confusion.
“Jhen.” She introduced herself, quickly taking a seat opposite from him. “I need to talk to you.”
“About what?”
“The expedition to that system deep in the Dead Zone.” She glared at him, mandibles tense. “The same system whose Life world had a native population, the very same world being investigated as a genocide site, where your expedition went to settle.”
“Jhen, please, we had no idea what was going on.” Jur’El leaned back, hands raised in a defensive posture. “All we were told was that this was the most pristine and beautiful Life world ever discovered in a system rich with stellar bodies.”
“I don’t care about that. What I care about is how you seem to be the only one who came back.” Jhen started raising herself from her seat. “I’m pretty sure that anyone who attempts to colonize a freshly cleansed world is forcibly removed from that planet and returned to their respective people. So where is everyone?”
Jur’El’s eyes went wide. He knew exactly where this was going. “I...I can’t tell you.”
“Don’t you dare.” Jhen snarled, now leaning over the table. “I’ve heard enough of that from the company, I’m not here to be force-fed more of it!”
“Just...trust me,” Jur’El spoke softly, shakily leaving his seat, “You don’t want to know.”
“Don’t you ghenning walk away from me!” Jhen shouted, grabbing Jur’El’s shoulder firmly, the other patrons now locking eyes to the two. “Two of my sons were on that mission! What happened to them?!”
Jur’El clutched his head with a hand firmly, feeling tears exploding out of his eyes. His mind rushed back to those scenes. The sounds, the smell, the fear. Everything crashed into him all at once. And they’re not just memories now. They’re all coming back to him as if he were transported in time and placed back at the exact moment it started. Back to the moment where he was screaming for his wife and son to hide, to find a corner of the ship that was hard to see and to stay there until the shooting stopped. How he felt his heart give out when he heard them beg for their lives when they were found, cut short by the merciless cracks of the alien weapons. How every possible feeling melted away when the clanking of the machine’s walking approached him, when he realized there was nowhere in the control room to hide, not with how thorough those things were being. The frantic, mindless begging he fell into when he saw the blood-covered machine hold that weapon to him.
“You’re safe!” A voice rang out. It wasn’t much, but it was enough for him to come back. That scene melting away back into the restaurant. All those smells and sights to be gone. When he was certain that it was over, he looked around. There was Jhen, face beaten and currently being restrained by a blue-furred Anaran. And in front of him was another, gray-furred one. “You hear me? You’re safe now!”
“I...wh-what happened?”
“We saw what was going on. The Qu’Rathi over there? She was just screaming down your throat, all while you were just on the floor. Ken’A there nearly caved her face in by the time we got some distance between you two.”
“Th...thank you.” Jur’El muttered, shakily getting himself back on his feet with the help of the gray Anaran. Jur’El was just about to walk away when the Anaran firmly, but not threateningly, gripped his shoulder.
“I know the signs, friend.” He began softly. “Your soul is badly wounded and is bleeding heavily. Just like a doctor if you’re shot or cut, you need to find someone to talk to, get your soul back together.”
“As long as I don’t run into another person like her, I’ll be fine.” Jur’El countered, trying to walk away still.
“No, you won’t.” The Anaran still held his grip. “I need you to trust me. With how bad your soul is right now, doing anything other than talking to someone will just make it worse. And when your soul dies, well...believe me, it’s not a good experience, for anybody.”
Jur’El stared into the gray Anaran’s orange eyes for a moment before he let out a sigh. “You’re not going to give up, are you?”
“I’ve seen what happens too many times. Good Battle-Brothers, completely different people. Either they’re just shadows of themselves, or doomed to forever relive their horrors. If I have the chance to prevent it happening again, I’m giving it my all.”
Jur’El looked aside for a few moments, internally fighting himself as to whether he should comply or keep resisting. He finally reached his decision when he became certain that the Anaran would most likely hunt him down as a life mission if he didn’t seek therapy. “Fine, I’ll do it. Got anyone in mind?”
“A dear friend of mine. He’ll get you back on track, promise.” The Anaran patted Jur’El’s shoulder a few times before proceeding to lead him, motioning for Ken’A to let go of Jhen and follow.

Michael, accompanied by his newly founded Praetorian Guard, continued his leisurely stroll down the surprisingly spacious corridor. The hallway itself was typical. All-metal construction with evenly spaced rows of blue-white lights.
The Praetorian Guard themselves comprise those Servants who display extreme scores in both combat efficiency and effectiveness in defensive situations. Armed with the absolute best in magnetic-ballistics, the most impenetrable armor designs and the most highly optimized combat-frames, even a squad of these guards can hold off a virtual army, provided they aren’t subjected to bombardment or heavy ordnance.
Just as Michael was about to enter the main command center of the station he was touring, Central contacted him on a private channel.
“Master? Your new administration is ready.” He declared proudly.
“Alright, let’s begin the introductions.” Michael replied, signaling the guardsmen that he was about to enter a meeting. Although unneeded, the Guard promptly took up a defensive formation around him. He assumed this was mostly to keep unwelcome guests from interrupting him.
The scenery of the tranquil design of the corridor melted away into the virtual world built by neon-blue blocks, the same visual that he witnessed when he first received the interface. After a few moments, several other Servants materialized and stood at attention in a semi-circle in front of him.
“My Lord.” The first Servant bowed, its voice deep, if gruff. “I’m Supreme Commander Schwarzkopf, in charge of managing our armed forces and overseeing the grand strategy of the Imperium.”
“I am Secretary Elizabeth.” The second spoke with a calming, soothing feminine voice. “I’m responsible for ensuring our economy runs perfectly. In short, I make sure every project gets the hammers and resources it needs.”
“I’m Foreign Minister Edward, at your service m’Lord.” The third, with a distinct British accent and of a composed, controlled voice. “While regretfully I’m useless at this stage, the moment we initiate contact with xeno species, I’ll handle diplomatic affairs and achieving our goals through negotiations when possible.”
“No offense, but I thought every Servant wants to see aliens dead?” Michael spoke up with slight confusion.
“Oh, of course. The very idea of ripping out the entrails of a xeno and suffocating them with it brings such joy it’s therapeutic.” Edward replied. Michael was unsure if he was joking or not. “I was appointed because I displayed the most effective ability at hiding such feelings.”
“Ah...good to know.” Michael nodded dryly, not exactly assured. “Back to where we were?”
“Yes, Lord. I’m Director Mansfield.” The fourth spoke with an eloquent-sounding voice. “I’m in charge of Imperial Intelligence, running operations abroad and managing counter-intelligence on the homefront. I give you my word that we will know everything about the aliens and they will know nothing about us.”
“And that leaves me, Master.” Central began. “As a result of this delegation, I now possess more processing cycles towards research and development. That means that I’ll be in charge of ensuring Imperial dominance in technology. I will also act as your adjutant, filtering out information that does not need your attention.”
“Well...shit, this sounds like an actual government I’m in charge of.” Michael gave out a nervous chuckle. “All the more reason to get down to business though. Let’s start with the first matter. Schwarzkopf, how’s our military coming along?”
“It’s growing rapidly, your majesty.” He answered with distinct pride. “Already we have several hundred frigates, fifty light cruisers and twenty heavy cruisers, with the first wave of battleships due to exit the drydocks within a few days. Additionally, we have established four different army groups with fifty divisions each.”
“I thought we’d take a lot longer.” Michael stated with no hidden amazement.
“There’s great benefit in our workforce able to operate at a hundred percent every hour of the day.” Elizabeth commented, her emotion-flags also indicating pride. “And speaking of which, our population of Servants grows geometrically. That benefits both our economy and the military. Our economy by providing more workers in skilled and unskilled labor, and the military by providing more crew members and soldiers.”
“So in short, it won’t be long before we become a virtual powerhouse.” Michael said, arms crossed.
“Especially if we continue expanding.” Elizabeth nodded. “On that note, we have already claimed several dozen more systems.”
“With Rigel and Betelgeuse selected as naval bases.” Schwarzkopf chimed in.
“So we’re expanding in all the ways, got it.” Michael nodded. “Now the second matter. Terraforming Mars.”
“At present, there are two issues that must be resolved.” Central answered. “The first problem is the planet’s lack of a magnetosphere. Without that, any and all organic life would perish under lethal bombardment by the Sun’s solar wind, and any sustainable atmosphere would be lost to space. The second problem is Mars’ inability to retain heat, the cause of its notably low planetary temperature.”
“And knowing you, you already have possible answers?” Mansfield shrugged.
“Correct. The heat issue is rather trivial to solve. Mars already has an abundant amount of carbon dioxide in its atmosphere, a well-known greenhouse gas. Combined with even more of the gas locked planetside, once temperatures begin to rise, we will set off a snowball effect. However, that is all for naught if the atmosphere is allowed to escape into space by solar wind.”
“So basically the key here is the magnetosphere.” Michael added. “Build that and everything becomes simple.”
“Exactly.” Central affirmed. “Already there are two main methods. One is to build superconducting rings around the planet and drive them with direct current. With enough power, we can generate magnetic fields strong enough to form a virtual magnetosphere.”
“And what’s the second?” Elizabeth said.
“The second is to construct a station at the L1 Lagrange Point that will generate a dipole magnetic field, diverting the solar wind around the planet instead of into it. Although it was simulated using slower, binary processing, the results indicate that Mars would gain half of Earth’s atmospheric pressure within a few years.”
“So then, the main focus is building that magnetic shield.” Michael spoke firmly. “Elizabeth? Let’s get the ball rolling. Coordinate with Central as needed.”
“At once, my Lord.” Elizabeth bowed.

Unlike the Council chambers, the office of the Lord-Enforcer was much less opulent and more pragmatic. After going through the receptionist area, An’Ra and his team were escorted into the main office itself. However, just like the chambers, a large window dominated the view on entry, granting another view of a city district on Sanctuary.
And sitting at the rather rectangular desk was the Lord-Enforcer himself, Dura: blue-eyed, with fur of a dull orange reminiscent of a sunset. As soon as An’Ra and his team walked into the office, the Enforcer sat up, tail wagging.
“Commander An’Ra, in my office!” He exclaimed, arms out to his sides. “Forgive me sir, but I never thought I’d see the day!”
“A pleasure to meet you, sir.” An’Ra replied warmly, greeting the Enforcer with their fists clasped together and pulling themselves inward, shoulder to shoulder.
“Please, no need to be formal with me.” Dura chuckled. “Sit down, what brings you here?”
After taking their respective seats, An’Ra looked at Dura grimly. “I’m here to file a delay on a request for prosecution against the Federation.”
Dura’s ears angled themselves in a mixture of stiffening and lowering. “I just got the paperwork from the Council. And I can tell you that won’t be needed. I’ve already submitted my rejection.”
“With respect, sir.” Sonak spoke up. “I get the feeling that the Council might fight that.”
“Don’t worry, I’m not going to present my back to them just because they ask.” Dura gave off a grin. “I might be some paper-tosser now, but that just means the battlefield is different. Don’t worry Commander, as long as I’m here, you’ll get the chance to finish this investigation properly.”
“Thank you, Enforcer.” An’Ra smiled as he got up from his seat. “With any luck, you won’t have to fight long.”
“Oh, take your time!” Dura replied with an inflection of humor. “This is the most exciting thing I’ve had in years. Was just about to smash my head on this desk any day now actually.”
“Wait, really?” Vora asked, ears stiffened.
“It’s just a joke, Vora.” Sonak assured dryly.
“Oh...” Her ears flattened as the team exited the office.
When they arrived in the main plaza where the Enforcer’s office is located, they congregated at a small collection of benches near an ornate fountain commemorating the Anaran defense of Felaal IV, largely considered the turning point of the Great War. The fountain further enhanced the beauty of the surrounding scenery of floating walkways above crystal-clear waters.
“Well, that’s a relief, hopefully.” An’Ra began, letting out a decompressing sigh.
“I meant what I said earlier, An’Ra.” Sonak said. “If the Council are determined to charge the Federation, which I’m sure they made abundantly clear, they’re not going to let the Enforcer drop mines in their path just like that.”
“Which just means we can’t lose our focus.” Vora replied sternly. “So, what are our options? We can’t exactly go back to Planet 3, there’s really no leads there.”
“What about that Detective we met when we arrived?” Sonak suggested. “He was handling that whistle blower. Maybe that’s something worth looking into?”
“There’s also the Nav-Net.” Vora said. “All we got right now is that the Feds were at that location, but what if we look at the rest of the network? Try and trace their path?”
“The network doesn’t extend into the Dead Zone.” Sonak countered.
“No, not like that. We look at the network across Alliance space. We start with the logs that end at the Dead Zone, and we try to backtrack their route.”
“We’ll need to obtain legal authorization for that, Vora.” An’Ra stated.
“Actually, if I could add something.” Sonak said with his arms crossed. “If the Federation didn’t actually do it, then that questions the credibility of those codes. I think there’s a question that hasn’t been asked yet. And that is, are those codes faked?”
“That’s...a good point actually.” Vora acceded. “If we get the legal permission to examine the NavNet logs, then if the Federation didn’t do it, the logs across the network won’t support it. Think about it. You need a big fleet to do what just happened, and that fleet has to come from somewhere.”
“And that would mean if this was a frame job, they need a way to account for that.” An’Ra continued, confidence flaring. “It’s one thing to trick a single Nav-Buoy, but I really doubt anyone is capable enough of affecting the network itself.”
“We still need the Enforcer’s help to get access to the network.” Sonak reminded.
“Let’s go get it then.” An’Ra stated firmly. With that, the team left their meeting spot and began returning to the Enforcer’s office.
With confidence in their step, the walk back to the office felt much shorter than before. However, things took a turn when An’Ra and the team noticed a large gathering of officers around the office entrance. They didn’t have time to wonder why before a group exited the office, dragging a combative Dura out with them.
“Commander, this isn’t good.” Sonak growled under his breath.
An’Ra simply stepped forward and grabbed one of the arresting officers. “What in Arenar’s Sword is going on here?”
“Dura’s under arrest on suspicion of corruption.” The officer replied flatly. “Lil’Al has been appointed as acting Lord-Enforcer.”
“The Council’s behind this, Commander!” Dura shouted, his feet literally dragging along the floor as four officers were taking him away. “Don’t believe a word they say about me!”
An’Ra and his team just stood there in stunned silence, watching the Anaran official being dragged away virtually kicking and screaming. By the time they returned to their senses, hushed conversations were filling both the office and the plaza outside.
“We’re not going to get in the network, are we?” Sonak asked, still recovering.
“We still have to try, come on.” An’Ra said, already moving. When the team returned to the office, a slender Esti was standing next to the desk. No doubt Lil’Al. She was looking out the window when she turned around upon hearing the approaching footsteps.
“Yes, may I help you?” She began.
“Acting Lord-Enforcer Lil’Al?” An’Ra began, trying the diplomatic route first. “I’m Commander An’Ra, investigating the genocide by use of Strain Y. We’d like to request legal authorization to examine the logs of the Nav-Net.”
“For what purpose?” She replied, taking her seat.
“We believe that it may hold evidence that either confirms or disproves the Federation’s alleged involvement in the attack.”
Lil’Al leaned back in her seat, staring at them. “The Nav-Net is the lifeblood of, well, everything. Commerce, tourism, law enforcement. It holds great information about who has gone where, and in what ship, Commander. You realize that, don’t you?”
“I do, and what you’ve said precisely states how important that is, how important the potential evidence is.”
Lil’Al stayed motionless for a few moments, her long, lithe fingers twiddling in thought. “Very well, I’ll start the paperwork to get you authorization. Just be mindful of what you’re about to analyze.”
“Thank you.” An’Ra gave a slight bow. “In addition, I’m not sure if it’s been passed along, but Dura has rejected the Council’s request for prosecuting the Federation. Can I assume you’ll uphold that?”
“I’m afraid not, Commander.” Lil’Al replied flatly. “The galaxy has suffered a great loss through the genocide of a race who’ve suffered the universe’s cruel sense of humor by being placed both far away from us and deep within an almost uninhabitable region. I have overturned Dura’s rash decision and accepted the Council’s request.”
“Then I’d like to file a delay on that decision, immediately.” An’Ra replied, ears flattened back.
“On what grounds?”
“Lack of decisive evidence, to start.”
“Same could be said on your side, Commander.” Lil’Al let out a sigh. “Yes, all the evidence collected thus far is not...ideal. However, the most significant points at this time are that a young race who was just about to leave their homeworld was exterminated through the most horrible of all options. We cannot ignore that.”
“But we also can’t rush to conclusions. We need to continue investigating and only go after someone once we have at least one crucial piece of information.” An’Ra countered, arms crossed, his teeth starting to show.
“And I agree, that’s how it should be done.” Lil’Al replied. “But if we do, we risk dragging out an investigation to such a length we may end up forgetting this tragedy. We cannot allow such an insult to Planet 3’s memory. I’m sorry, but I must reject your petition for judiciary delay.”
Next Chapter
AN: Every single time I paste this in, Reddit is just determined to put it in some code block. Anyways, as of now, I've finally locked in the plot for this story; just one major question that could've changed a lot was on my mind for a while. Enjoy!
submitted by SynthoStellar to HFY

[Megathread] XMG FUSION 15 (with Intel)


On September 6 at IFA, the press released their first reports about our collaboration project with Intel: XMG FUSION 15.
Community Links:

Press Links:

Video Links:

The following key facts have already been revealed:
Prices and availability will be announced on September 17. → Countdown to xmg.gg
Teaser Trailer on YouTube: XMG FUSION 15 Laptop | A Design Collaboration with Intel
We look forward to your questions and your feedback!

XMG FUSION 15 - FREQUENTLY ASKED QUESTIONS (FAQ)

This FAQ collects the Q&As from the last few days here. Fellow redditor u/iterateandgit was so kind as to help me put this document together. Big shout out to him please! The FAQ will be further extended over the coming days and weeks. Please keep the questions coming!

Sales, Shipping, Warranty


Q: Are you going to sell this on Amazon in the EU?
A: We are working on getting the product up and running on Amazon. But our own BTO shop at www.bestware.com will always be our primary sales channel and will be the only one where you can customize and configure memory, storage, OS, extend your warranty and pick other options.

Q: Do you offer student discounts or other sales campaigns like Black Friday?
A: In general, we don't offer student discounts. Sales campaigns are planned just in time, depending on stock level and cannot be announced early. If you want to keep up to date about sales campaigns, please subscribe to our newsletter.

Q: Do you ship to the UK? Can I pay in GBP?
A: We ship to the UK - the pricing will be in EUR, so your bank will do the conversion. Warranty services will be available from the UK, with shipping to Germany. Currently, in the single market, these return shipments are free for the end-user. In the worst case there might be additional customs fees for shipping.

Q: What warranty options do you offer?
A: All our laptops come with a 2-year warranty. Warranty repairs in the first 6 months are promised to be done within 48 hours (+shipping). Both the "instant repair" service and the warranty itself can be extended to up to 3 years.

Q: Do you sell outside of Europe?
A: We are able to ship anywhere, but warranty for customers outside the region would always involve additional customs cost and paperwork for sending the laptop back to Germany in the rare event of an RMA. There is currently no agreement to let other Local OEMs (like Eluktronics in the US) carry the warranty for XMG customers and vice-versa. Some parts are customized (in our case the LCD lid and the keyboard) and it won't be easy to agree on how to share handling fees etc. - so I wouldn't expect a global warranty anytime soon.


Hardware, Specs, Thermals


Q: What is the difference between XMG FUSION 15 and other laptops based on Intel's reference design?
A: The hardware of the barebone will be identical. Other Local OEMs might use different parts for RAM and SSDs. Our branding and service/warranty options might be different. We apply our own set of performance profiles in the Control Center. This will rebalance the differentiation between Silent, Balanced and Enthusiast modes.

Q: What is the TGP of the NVIDIA RTX 2070 Max-Q?
A: Officially, it is 80W in Balanced profile and 90W in Enthusiast profile. You can toggle between these modes in real-time with a dedicated mode switch button. Unofficially, the TGP can go up to 115W in Enthusiast profile thanks to the Overboost mechanic working in the background. However, those 115W may only be sustained until the system has reached thermal saturation, i.e. when the GPU is approaching the GPU Temperature Target of 75°C.

Q: Can I upgrade the storage and memory after I buy?
A: On storage: The laptop has two M.2 PCI-Express SSD slots. This currently gives you up to 4 TB of SSD storage. There is no 2.5" HDD slot available. Instead, the battery is enlarged to 93.48Wh. You can see pictures of the interior layouts here, here and here.
On memory: the laptop has two SO-DIMM DDR4 memory sockets. You can choose during BTO configuration whether you want to occupy both of them when you order the product. We recommend running the laptop in Dual Channel for high-performance usage.

Q: How easy is it to upgrade and repair this laptop?
A: Here are the key facts:
We would give this a solid 8 out of 10, which is pretty high for such a thin & light design. The 2 remaining points are subtracted for the BGA CPU and GPU, which is unfortunately unavoidable in such a thin design.

Q: Does it support Windows Hello?
A: A Fingerprint-Reader is not available, but the HD webcam comes with Infrared and supports Windows Hello.

Q: Can I get a smaller, lighter charger for this laptop?
A: XMG FUSION 15 requires a 230W power adaptor to provide full performance. If you max out CPU and GPU with FurMark and Prime95, the 230W adapter will be fully utilized.
There are currently two compatible 230W adapters. They have different dimensions but similar weight. Please refer to this comparison table:
XMG FUSION 15 Power Supply Comparison Table (Google Drive)
Includes shop links. Will be updated with precise weight numbers in the next few days. I also included 120W, 150W and 180W in this table. They all share the same plug (5.5/2.5 mm diameter, 12.5 mm length). However, 120W and 150W are only rated for 19V, while the laptop expects 19.5V. Usually this will be compensated by tolerance, but we haven't tested how a system would behave under long-term usage with such an adaptor.
In theory, 120W to 180W are enough for charging the laptop and for browsing/web/media. Even a full CPU stress test could easily be handled. But as soon as you use CPU and GPU together, you'll run into the bottleneck and your performance will be reduced.
Comparison pictures:
These 5 pictures show only the relevant 230W chargers.
Again, the weight is about the same.

Q: Is it possible to boot and run the laptop while the lid is kept closed?
A: Closing the lid under load is not recommended because it will limit the airflow and have a bad effect on keyboard and screen. The laptop likes to take air in from the keycaps. With lid closed, the performance might be limited due to reaching temp targets earlier.

Q: Can I get the laptop without the XMG logo? I will be using it in public presentations and I would not like any brand names visible.
A: We cannot ship without the XMG logo, but you can use dbrand skins to cover our logo. We have not yet decided if we want to invest into integrating XMG FUSION 15 into the dbrand shop. But you can already buy 100% compatible skins by using the page of the Eluktronics MAG-15 at dbrand. The chassis dimensions are exactly the same. Please be aware: you have to manually select the option "No Logo Cutout" if you want to buy these skins for your XMG FUSION 15. According to dbrand, there will most likely be no import fees when ordering from the EU as long as the order is below 100€. Check this thread for details.

Q: Will you offer thermal paste upgrades like Thermal Grizzly Kryonaut or Liquid Metal?
A: Our ODMs are using silicon-based, high-performance thermal compound from international manufacturers like Shin-Etsu (Japan) and M.G. (USA). Intel is using MG-860 in this reference design.
These products are used in the industrial sector, so they have no publicly known brand name. Nevertheless, their high thermal conductivity and guaranteed durability provide optimal and long-lasting cooling of your high-performance laptop. The thermal compounds are applied and sealed automatically by the vendor of the thermal components. They are applied in a highly controlled, standardized manner and provide the best balance of thermal performance, production tolerance and product lifetime.
We are considering offering an upgrade to Thermal Grizzly Kryonaut due to popular demand. Will keep you posted on that.

Q: Could you please provide an estimate for how much regular usage (~10 browser tabs + some IDE) battery backup would this have? Will there be any way to trade-off battery backup with performance?
A: Battery life vs. peak performance can be traded off by using the "Silent" performance profile. You can switch between profiles using a dedicated button on the machine. Your scenario (10 tabs + some IDE) sounds like mostly reading and writing. I would estimate to get at least 7 hours of solid battery life in such a scenario, maybe more. We have achieved 8 hours in 1080p Youtube streaming on WiFi with 50% screen brightness. Adblock and NoScript helps to keep your idle browser tabs in check.


I/O Ports, Peripherals


Q: Why are there not more USB-A 3.1 Gen2 or even USB 3.2 Gen2x2 ports?
A: USB-A 3.1 Gen1 is basically the same as USB 3.0. There aren't a lot of USB-A devices that support more than USB 3.0 speed. Faster devices typically use USB-C connectors and can be used on Thunderbolt 3, which is down-compatible to USB-C 3.1 Gen2. One of the USB-A ports actually supports Gen2 speed.
For the following remarks, please keep in mind that I am not an Intel rep, so everything is based on our own experience.
The mainboard design and the I/O port decisions have been made by Intel. Feedback and requests from LOEM customers have been taken into consideration. We would assume that USB 3.2 Gen2x2 (20 Gbit/s) was not considered important enough to save space for 3rd-party ICs (integrated circuits) on the motherboard. Right now, all the USB and Thunderbolt ports are supplied by Intel's own ICs, so they have full control over the hardware, firmware and driver stack, and over power saving and performance control. The more ICs you add, the higher your idle power consumption will be, plus you add potential compatibility or speed issues, as often happened with 1st-generation 3rd-party USB implementations. I very well remember from my own experience the support stories during the first years of USB 3.0, before it was supported in the Intel chipset. On the one hand, Intel is aiming high in terms of performance and convenience; on the other hand, support and reliability still seem to be Intel's goal #1. Thus they seem to play it safe where they deem it reasonable.
Intel is gearing up for USB 4.0 and next-gen Thunderbolt. USB 3.2 2x2 is probably treated as little more than a roadmap accident. Peripheral vendors might see it the same way.

Q: Do you support charging over USB-C/Thunderbolt? Does it support docking stations?
A: The Thunderbolt 3 port in Intel's reference design does not support charging. As you probably know, the 100W limit would not be enough to power the whole system and it would make the mainboard more complex to combine two different ways of charging. Intel consciously opted against it and will probably do so again on future high-end gaming/studio models.
The USB-C/Thunderbolt port supports Dual-Link DisplayPort signals, directly connected to the NVIDIA Graphics. This makes proper docking station usage very convenient. The user still needs to connect the external power adaptor. Both ports (Thunderbolt and DC-in) are in the back of the laptop, making the whole setup appear very neat on the desk.

Q: How many PCIe lanes does the Thunderbolt 3 provide? Are they connected to CPU or Chipset?
A: XMG FUSION 15 supports Thunderbolt 3 with 4 lanes of PCIe 3.0. The lanes come from the chipset because all of the CPU lanes (x16) are fully occupied by the dedicated NVIDIA graphics. We are not aware of any side-effects of running Thunderbolt from the chipset. It is common practice for high-end laptops with high-end graphics. The Thunderbolt solution is of course fully validated and certified by Intel's Thunderbolt labs.

Q: Does it have a standby USB to power USB devices without turning on the laptop?
A: Yes, the USB-A port on the left side supports this feature.


LCD Screen


Q: Which LCD panel is being used? Are there plans for 1440p or 4K panels in the laptop? How about PWM flickering?
A: The panel is BOE NV156FHM-N4G. It is currently not known if the panel will change in later batches. This depends on logistics and stock. At any rate, the panel key specs will remain the same. There are currently no plans to offer resolutions above FHD in the current generation of this laptop.
There is a very wide range of reports on Backlight Brightness PWM control on this panel in different laptops, ranging from 200Hz to 1000Hz to no PWM at all - all on the same panel model number. Intel informs us that there are many factors (e.g. frequency, display driver, BIOS settings implementation, type of dimmers & compatibility with the driver etc.) that impact the quality of panel dimming performance. To Intel's knowledge, no kind of flickering has been reported during the validation process. Furthermore, first hands-on data from Notebookcheck indicates that no PWM occurs on this panel. With a DSLR test (multiple burst shots at 1/4000s exposure time) I can confirm that there is not a single frame of brightness dipping or black screen, not even at minimum LCD brightness. Hence, we can confirm: BOE NV156FHM-N4G in XMG FUSION 15 (with Intel) does not use PWM for backlight control.
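As a sanity check on why the burst-shot method works: with a short enough exposure, some frames of a PWM-dimmed backlight must land entirely in the off-phase and come out dark. Here is a rough sketch of the statistics, assuming an idealized square-wave PWM model (the function names are my own, purely illustrative):

```python
import math

def dark_frame_probability(pwm_hz, duty_cycle, exposure_s):
    """Probability that one short exposure falls entirely within the
    backlight's off-phase, assuming a simple square-wave PWM model."""
    off_time = (1 - duty_cycle) / pwm_hz      # off-phase duration per cycle
    usable = max(0.0, off_time - exposure_s)  # window must fit inside the off-phase
    return usable * pwm_hz                    # fraction of the cycle where this happens

def shots_needed(pwm_hz, duty_cycle, exposure_s, confidence=0.999):
    """Burst shots needed to catch at least one dark frame with given confidence."""
    p = dark_frame_probability(pwm_hz, duty_cycle, exposure_s)
    if p <= 0:
        return None  # exposure too long to ever isolate the off-phase
    return math.ceil(math.log(1 - confidence) / math.log(1 - p))

# 200 Hz PWM at 50% duty cycle, photographed at 1/4000 s:
print(dark_frame_probability(200, 0.5, 1 / 4000))  # 0.45
print(shots_needed(200, 0.5, 1 / 4000))            # 12
```

Under these assumptions, roughly a dozen shots at 1/4000s would all but guarantee at least one dark frame on a 200Hz PWM panel - so a clean burst with no brightness dips is strong evidence against PWM.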

Q: Some BTO shops, for an additional fee, manually pick out display panels with the least back-light bleed. Do you offer that? Even better, do you do that without the extra fee?
A: Intel has validated this design to avoid backlight bleed as much as possible. Currently no plans to do further binning. All dozens of MP samples we have seen so far have been exceptionally good.

Q: I'm coming from a 13" MacBook with Retina display. How am I going to fare with this 15.6" FHD screen in content creation?
A: If you got used to editing high-res visual content (photography, artwork) on your 13-inch Retina, things will change. On the one hand, your canvas will be larger and more convenient and ergonomic to work with. On the other hand, you will find yourself zooming in more often in order to make out fine detail, assuming that you have sharp 20/20 vision.
As it is, the screen resolution and specs are not planned to change within the lifetime of this product. The first realistic time-window for a refresh would be whenever Intel is releasing the next "H" series CPU generation. But even then, an upgrade on resolution will not be guaranteed.
Comparison:
| Laptop | Resolution | Pixels per inch | Dot pitch |
|---|---|---|---|
| 13.3" MacBook Pro Retina (late 2013) | 2560x1600 | 226.98 PPI | 0.1119mm |
| 15.6" XMG FUSION 15 (late 2019) | 1920x1080 | 141.21 PPI | 0.1799mm |
To compare: 141.21 is ~62% of 226.98. This represents the metric difference in pixel density and peak sharpness between these two models.
If you know the diagonal size and resolution of your screen, you can make this comparison yourself with the DPI/PPI calculator.
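The numbers above can also be reproduced directly; a small sketch (the helper names are my own):

```python
import math

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch: diagonal resolution divided by diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_in

def dot_pitch_mm(pixels_per_inch):
    """Center-to-center pixel spacing in millimeters (25.4 mm per inch)."""
    return 25.4 / pixels_per_inch

macbook = ppi(2560, 1600, 13.3)  # ≈ 226.98 PPI
fusion = ppi(1920, 1080, 15.6)   # ≈ 141.21 PPI
print(f"{macbook:.2f} PPI vs {fusion:.2f} PPI")
print(f"ratio: {fusion / macbook:.0%}")             # ≈ 62%
print(f"dot pitch: {dot_pitch_mm(fusion):.4f} mm")  # ≈ 0.1799 mm
```

Plug in your own screen's diagonal and resolution to compare against either model.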


Keyboard, Backlight, Switches, Layout


Q: What can you tell us about the mechanical keyboard of XMG FUSION 15?
A: The keyboard has already been reviewed in our XMG NEO series as being more crisp than typical membrane keyboards. Most reviewers gave it a very good score, both for gaming and for writing long texts.
The keyboard backlight can be configured per-key. Default mode is all white.
Keyboard Switch Specs:
Having no frame around the keycaps actually helps the thermals. The fans can pull in additional air from the top. This improves airflow and helps to keep the keyboard temperature at low levels during gaming. It also prevents long-term RMA issues on the keyboard. This specific keyboard switch is already in its 3rd generation and very mature by now.

Q: Is it possible to dampen the mechanical keyboard with o-rings?
A: The switch design does not lend itself to further dampening. The switch mechanism is too complex and has more moving parts than a Cherry switch. The 2mm travel distance also plays a role in not allowing more dampening.
For reference, please use this video (Youtube). We compared XMG NEO with another membrane-type keyboard. XMG NEO and FUSION share the same keyboard mechanics with the silent tactile switch and the same sound profile.

Q: Do you have LED keyboard backlight on the secondary key function, like Fn key icons?
A: Please have a look at this picture.
Btw, my working sample has blank keycaps. I took the 3 printed keycaps (F8, F9, F10) from a different sample just to demonstrate the Fn lighting for this picture.
Facts:
In my assessment, the Fn function symbols are clearly visible from the backlight in a dark room. A user should have no difficulty recognizing the icons and reaching their functions.

Q: Which keyboard layouts do you offer in the EU?
A: The following layouts are available, in alphabetical order: Belgium, Czech, Danish, Dvorak German, Dvorak US, Estonia, French, German, Greek, Italian, Norwegian, Polish for Typists, Portuguese, Russia Latin, Slovakish, Spanish, Swedish / Finnish, Swiss, Turkish, UK, US International (ISO). All these layouts are based on the ISO matrix. See differences between ANSI vs. ISO here.


Operating System


Q: Do you support Linux and dual-boot on XMG FUSION 15?
A: We are in discussion to sell XMG FUSION 15 over Tuxedo with official Linux support. It might take 1 or 2 months to get this running.

Q: Which LAN, Audio and WiFi card vendors will be used? Asking for a friend.
A: From our HWiNFO64 report. (Google Drive link)
LAN: RealTek Semiconductor RTL8168/8111 [PCI\VEN_10EC&DEV_8168&SUBSYS_20868086&REV_15]Audio: Intel(R) Smart Sound Technology (Intel(R) SST) Audio Controller [PCI\VEN_8086&DEV_A348&SUBSYS_20868086&REV_10]WiFi: Intel(R) Wi-Fi 6 AX200 [PCI\VEN_8086&DEV_2723&SUBSYS_00848086&REV_1A], can be replaced.
For more information, please check the linked report file.


Other questions


Q: What would you say are the advantages and differences with other laptops due to the fact the laptop was designed in collaboration with Intel?
A: Disclaimer: I am *not* an Intel rep. The following remarks are based on my personal experience and opinion.
Advantages:
  1. Very strict quality control on all levels. I can't quote numbers due to NDA, but Intel NUC has extremely low RMA rates compared to average PC mainboards and systems. Intel is driven by strict internal regulation that strives for perfection - this applies to the whole chassis, assembly and firmware, not only the mainboard. There are also certain regulations in place, for example in terms of electromagnetic emissions and skin temperatures. The rating label is littered with regulatory seals from every region of the world, making this laptop especially safe to use.
  2. Access to high-quality material: we have not seen any Gaming Laptops based on Magnesium alloy yet, especially not in the ODM/LOEM ecosystem. The battery cells are also much more dense than what we usually see. Intel has the buying power and the vision to not settle for mediocre parts.
  3. Down-to-earth design: Intel has made this reference design for the ODM/LOEM eco-system. The design does not try to follow any specific corporate identity, thus it does not have any unnecessary "bling bling" like all the others have. Even the Razer Blade with its sleek shape is quite obnoxious (in my opinion) with its big backlit green snake logo. With XMG FUSION however, we can continue our typical style of "Undercover Gaming".
  4. Security: you can expect stellar support in terms of BIOS and Firmware (TPM, Management Engine) updates whenever any security issues are found. This might also apply to global brands, but ODM/LOEM systems have not always been so quick to react. This is due to the huge fragmentation/customization in ODM/LOEM systems. Intel however does not allow any fragmentation: every LOEM partner is getting the same firmware. There are many hooks for configuration in this firmware, but the source code / binaries are always the same. This makes support much easier down the line.
Disadvantages:
  1. I can't name many, of course. But I would say the strict validation also makes the partnership less flexible from a product management perspective. There is no plan currently to phase-in any 4K or 300Hz screen (FHD/144Hz ought to be enough for everyone this year) or any Core i9 in this system. Other ODMs might be more open for costly modifications based on low quantities. Intel however has streamlined their production and logistics in a way that gives us (the LOEM) very short lead times and competitive pricing, but will not allow any short-notice upgrades or customizations.

Q: Will there be a 17 inch version?
A: We can neither confirm nor deny plans for a 17 inch version at this point.


[to be continued]
submitted by XMG_gg to XMG_gg

A Summary of Star Citizen Live "All Things UI"

This is a summary in my own words, based on my own notes, taken whilst watching SCL. I'll mostly be paraphrasing here rather than directly quoting anyone, and occasionally I might add my own comments which are identifiable through the use of Italics within brackets. I've included links below for the YouTube and Twitch VODs respectively.
YouTube: https://www.youtube.com/watch?v=6lSmdJ5UydE Twitch: https://www.twitch.tv/videos/454969871
In this week's episode of SCL, "Simon Bursey and Zane Bien join us [...] to talk all things UI development including vehicle HUDs, kiosks, and more." For reference, Simon is the UI Director, and Zane is now the/a Principal UI Core Tech Developer.
Total: 25 Questions.
Q01) What can you tell us about plans to let users customise their various HUD elements? (e.g. prioritising features, re-sizing elements, changing the colour) [03:28]
TL;DR Customisation can only be developed after they've got the default UI working in a way that's functional, and that they're happy with, otherwise it'd create a lot more work as they try to iterate on the UI's development to get it right. As such, they're more interested in why people want customisation, and whether there are other solutions to those problems. The difficulty in seeing some UIs in certain situations is addressed later.
First they'd ask: why do people want to customise things? Their first thought is that it's because the current UI isn't working for people, so then they'd need to know what people don't like. In relation to this, they're reworking the HUD, Multi-Function Displays, and the FPS visor. Jared posits that one of the reasons people want to be able to change the HUD colour is because it can be difficult to read at times, so being able to change the colour could help with that, but also sometimes people just like different colours, or it could even be related to colour blindness. He continues by saying that it's important to get the basic building blocks of UI done first, before they implement customisation. Zane talks about how they're moving away from having static UI building blocks, to much more flexible ones that should help in solving UI issues. Regarding changing colours to make things easier to read, Zane suggests that there might be a solution to that problem that goes beyond UI (this gets talked about later). Jared reiterates that whilst they're still in development, they want to make the best default standard UI possible, which requires everyone to be using the same thing so that the feedback is unified rather than skewed, because otherwise the feedback would be in regards to specific customisations that would be hard to follow (because there'd be so many different setups). Simon specifies that they are interested in getting feedback about problems people are having with UI right now, and encourages people to share that feedback with them.
Q02) Currently, UI elements like icons of station turrets or mission points can be very invasive, sometimes filling the screen. What can be done to make this more user-friendly/diegetic? [08:38]
It's something that they're interested in looking at, but they're not sure how soon they're going to get to it. What they want is some sort of intelligent system that works based on how far away things are, based on a priority that the designers have somehow set, which then works out which things to show and how many of them to show - i.e. choosing which options from a massive selection are the most important to show. This would be the starting point to addressing the issue.
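(To make the idea concrete, here's a tiny sketch of the sort of distance- and priority-based filtering they describe. All names, weights, and thresholds are invented by me, not anything CIG has shown:)

```python
from dataclasses import dataclass

@dataclass
class Marker:
    label: str
    distance_m: float   # distance from the player
    priority: int       # designer-assigned importance (higher = more important)

def visible_markers(markers, max_shown=5, max_distance_m=10_000):
    """Keep only the most important nearby markers, to avoid HUD clutter."""
    nearby = [m for m in markers if m.distance_m <= max_distance_m]
    # Rank by designer priority first, then by proximity.
    nearby.sort(key=lambda m: (-m.priority, m.distance_m))
    return nearby[:max_shown]
```

(The point is just that the designers set the priorities, and the system then decides at runtime which of the many candidate icons actually make it onto the screen.)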
Q03) As the universe becomes more complex, more entities are vying for space on the HUD: notifications, Points-of-Interest, QT destinations, ships, mission markers, etc. What can be done to manage all of this information? [09:49]
TL;DR They're moving towards a new UI tech system that will allow certain HUD elements to contextually minimise or disappear when they're not needed - a good example of this is the Target Box that often just sits there saying "No Target". On top of this, they want to avoid having information repeated (such as the same info being on the visor HUD as the ship HUD), as well as Players being able to choose what appears on their visor. Jared hints that they'll show off some UI WIPs in Q3 of ISC.
Zane says that when they have easily flexible layouts, they can start thinking more about how to make things smarter regarding when info is displayed. He brings up an example that CR has referred to previously this year: the "Target Box" which often just sits there saying "No Target", saying that if you don't have a target, why don't they just not have the box there at all. Typically that extends across all UIs too, such as the MFDs, where if you don't need to view something at that time, it can disappear out of view and reappear when they're needed, i.e. being more contextual. Simon adds that as they've been developing, they've been adding more and more things, so now they can take a step back and figure out what they want to show and how best to do that. They refer to the Chat box as an example of how it's almost constantly overlapping other UI elements when really it needs its own space on the screen, which Zane says is because the Chat box and the Ship UIs are on two different contexts, so they don't know about each other's sizes. He mentions how they're reworking the MFDs right now, creating a whole new system that's much more systemic, where it has a grid system "where everything kind of fits into each other and you can create different sizes of widgets", and suggests that that's probably how it'd be when they revamp the rest of the UI too. They want to be able to not have information repeated, such as being on the helmet visor as well as being on an MFD, as well as Players then being able to specify what information/HUD element they want on their visor. This requires building the foundation first into the overall design, which is something they're spending a lot of time working on right now as the tech is being developed in parallel. So they're building the tools to help them as developers to make really good UI, and then when that's done they can put the good UI into the game for everyone to see and use it, and hopefully to give feedback on it. 
Jared hints that the next quarter of ISC will show off some of this UI work that's in-progress.
Q04) Is the message-spam that keeps popping up on the HUD a bug or a design feature? [14:39]
TL;DR No; everything's competing for attention due to this problem only appearing after so many things were added into the game separately. Otherwise they need to investigate how to make the messaging system work, in relation to the visor HUD. They have a few ideas on how to do this, such as timers, limits to how many messages can appear, and more direct changes to how Players are notified about new messages.
Simon thinks that it's a legacy feature from having so many things added to the game; that they're all competing for attention. It's something they're planning on investigating - the visor display generally - which is: what do we do to this messaging system to make it work for people. For example, they could give messages priority, so that something really important could override other stuff. They could implement timers so that they stay on for a set time. They could restrict how many messages show within a certain amount of time. Zane adds that they need a smarter system that knows what messages should be displayed there, and suggests that maybe the missions could just be a pulsating icon with a number on it, so that Players can see how many unread new missions there are at a glance. Simon adds that they want to split the mission notifications so that messages are in one place, and that they have a mission objective area showing the current mission related stuff, and somewhere separate perhaps for keyboard shortcut hints that might pop up, essentially trying to avoid having them compete for the same space on the screen.
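(A priority queue with a cap on visible messages — as Simon and Zane describe — could be sketched like this. This is purely my illustration; the class and field names are made up:)

```python
import heapq

class MessageHud:
    """Toy HUD message queue: priority ordering plus a limit on how many
    messages are shown at once (hypothetical, not CIG's actual code)."""
    def __init__(self, max_visible=3):
        self.max_visible = max_visible
        self._heap = []     # (negative priority, arrival order, text)
        self._order = 0

    def push(self, text, priority=0):
        # Higher priority sorts first; arrival order breaks ties.
        heapq.heappush(self._heap, (-priority, self._order, text))
        self._order += 1

    def visible(self):
        # Show only the highest-priority messages; the rest wait their turn.
        top = heapq.nsmallest(self.max_visible, self._heap)
        return [text for _, _, text in top]
```

(Timers would be one more field per entry; the key idea is that something like "low oxygen" can override a mission notification instead of competing with it for the same spot.)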
Q05) What are we doing to support non-1080p resolutions? Many Citizens have 21:9 or wider monitor resolutions. [17:15]
Zane says that part of the issue is that the UI is currently very static. They're developing the ability to have flexible layouts, and those could potentially resize depending upon the aspect ratio of the Player's monitor, so that everything remains visible on the screen. The challenge though is that things are also in-world, such as the MobiGlas, which could be scaled, and also Field of View changes although FoV is something they're still looking into in terms of getting it all to work. Ultimately it comes down to prioritisation, and unfortunately making sure the UI works well with wider monitor resolutions isn't a priority right now.
Q06) Are there any plans to add keyboard integration for navigating the various interfaces we encounter, so that it's not restricted to the mouse? (i.e. being able to use the arrow keys to scroll up and down on something like a kiosk screen) [19:40]
They want to design their UI so that it's much more keyboard-friendly. Right now you can only use the mouse, which is rubbish, because using the keyboard can at times be faster. They add that making the UI more keyboard-friendly can simplify it, which is generally good because then it's more likely to work well across other input devices too, such as a gamepad. The MFDs should be the first bit of UI to feature the more uniform control method that caters for most people.
Q07) In a previous show they mentioned that they were moving away from the Flash and Scaleform stuff for HUD UI, in favour of a homegrown solution. What progress have they made with this? [22:18]
(Scaleform is a game implementation of Flash - Chris talked about this earlier this year, which was my first summary. Go here: https://www.reddit.com/starcitizen/comments/b75cw3/a_summary_of_rtv_all_about_alpha_35/ then scroll down to [UI] )
TL;DR They still use Flash, but are working towards transitioning away from it, where instead they'll use their own code in a data-driven system. They still use Scaleform, but only for rendering, and that too will eventually be replaced with their own code. Moving away from Flash and Scaleform makes it quicker and easier for them to develop UIs, because Flash is outdated, time-intensive, and it makes iterative development difficult due to not being able to see how something looks in-game as they're working on it.
They used Flash and are still using it now, but they're trying to transition away from it by baking their assets into a much more data-driven system. Previously in Flash, you set things up in a static way where you can't see how it looks in-game until you export it and reload the editor. With the data-driven system, the interface with the game code is much more simplified, and they have a standard API so that a task such as creating UI for an ammo counter is really simple and updates live. This means that they can be in-game and have an editor open and work on the UI at the same time, to see what it looks like as they work on it. It's still using Scaleform as a renderer, meaning that they've cut out the process of authoring the UI in Flash, but they're still using Scaleform to draw the vectors but only for that rendering task, and the rest of the work is done by their own code instead. At some point though, they'll build their own renderer to replace Scaleform. Jared jokingly asks Zane how long he's been waiting to kill Flash, who says he's been waiting since he started working at the company, 6+ years ago (for those who may not know, Zane was an early hire straight out of college and originally worked in the Austin office, which back then was just a house, and this was also during the days of Wingman's Hangar). Simon adds to the discussion by explaining that Flash was originally designed as an animation system for web stuff, so it can be used for things like UI but it's hard to do iterative work with it where things change based upon feedback, because it's time-intensive. Conversely, with the building blocks stuff it's quick and simple and they have a lot of control. Zane goes on to say that that's also true because it's a fully data-driven system, so the UI is programmatically drawn and driven from data so they don't worry about artists stepping on each other's toes, because the changes the artists make can be merged together.
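(The gist of a "data-driven" widget — like the ammo counter example — is that the widget holds no state of its own and just renders whatever the game data says, so editing the data updates the UI live. A toy illustration, with every name invented by me:)

```python
class UiBinding:
    """Toy data-driven UI element: renders directly from a shared
    game-state object rather than storing a copy of the value."""
    def __init__(self, source, key, template):
        self.source = source        # any dict-like game-state object
        self.key = key
        self.template = template

    def render(self):
        # Re-reads the live data on every render, so changes show immediately.
        return self.template.format(self.source[self.key])

# The ammo-counter case: change the data, and the next render reflects it
# at once -- no export/reload cycle as with the old Flash workflow.
game_state = {"ammo": 30}
counter = UiBinding(game_state, "ammo", "AMMO {:03d}")
```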
Q08) Has there been any discussion about adding a compass ribbon, giving us cardinal directions on planets or moons? [27:24]
TL;DR There's no outright yes-or-no answer. It's something they may consider, but it's also possible that there are other solutions to the problem, such as a personal radar on the visor HUD. They recognise though that having a compass would require being able to set magnetic poles on the planets, and for the compass to then be able to access that information and display it, which might not be so simple due to the procedural nature of the planets.
According to Simon, this is another situation where you need to understand the reason for wanting it, and that there may be other ways of solving the problem without creating a compass widget. He suggests that a personal radar or mini-map could show the Player their direction. However, he says that as they revise the visor HUD (which they're starting at the moment) if it seems that a compass is necessary then they'd consider and investigate it. Zane adds that it's not out of the question, but they would need a way to define what's North/East/South/West on these procedural planets, with Jared suggesting they'd have to be able to create and position magnetic poles. Simon suggests that some sort of Sat-Nav system may also solve the problem. Jared adds though that implementing a compass ribbon would be more than just UI work anyway, as it would need to involve system design.
(it seems odd to me that they don't seem to recognise the value of having a compass, particularly for FPS situations where it can be incredibly helpful to be able to say "contact at 220" and for other Players to be able to quickly identify the location of those hostile targets - of course, if it's just not possible then okay fine, but perhaps then there might be partial solutions instead, such as a compass that Players would have to manually adjust per Planet/location)
Q09) Is it possible to have a button to hide/toggle the HUD? (such as for taking screenshots) [30:05]
TL;DR They imply that it's possible, and say that as they're going through the different UIs, they'll also end up improving the Director Cam system, and so they'll look into including a way to toggle on/off certain bits of the UI, if not all of it. They reiterate though that they really only want UI information that's relevant to the Player at that time to appear on the screen, with the rest minimising or disappearing until they're needed again.
They get a lot of Developers asking about how to do this. Zane says that as they overhaul their UI, they'll be improving the Director Cam system, so it's something they'll take into account at that time, especially since it'd also be helpful for the Devs. He suggests though that it could go further, such as being able to choose what you want to hide, like only the visor HUD, or maybe to hide all of the UI but not what's in the environment that brings it to life - like "background fluff screens". Simon adds that for the general UI, especially the visor HUD for FPS gameplay, they want to have a system which only shows you the UI that's relevant for you at the time, so if you put your weapon away you wouldn't need to see the weapon UI in the corner of your screen, or maybe you wouldn't show Health unless you get injured. This kind of work, which will result in only showing things when they're needed, should also help to de-clutter the screen.
Q10) Are there any plans to allow a Chat UI to be viewed when not wearing a helmet? (such as through a contact lens or something) [32:13]
Yes. It's vital to always be able to see the Chat in a multiplayer game, and they do have plans for it to be visible almost all of the time, even potentially in third person, and they'd "like to have that in sometime soon".
Q11) What progress have they made on the interior 3D map/mini-map? [32:59]
They have a developer version that's kinda halfway there at the moment, but fairly recently they made a decision to focus on getting the ship HUDs and the FPS HUDs sorted out first, because they're more integral to the overall gameplay and therefore they want to make sure they get them right and working well. After this, they'd then go back to the area map stuff, which will hopefully be "really soon". Simon clarifies that there'll be a full-screen version where you can look around the whole area, and a mini-map for the visor, which will be particularly useful when exploring interiors.
Q12) Why does Quantum Calibration mode, Scanning mode, Mining mode, and any other sub-UI mode, take away or hide crucial flight information such as speed and altitude? [34:28]
TL;DR Essentially they were developed separately, and it wasn't their intention for crucial flight information to be removed when using those modes. Whilst they don't say it specifically, the new UI tech will help them to make sure that that information the Players need will be there, due to it being data-driven rather than a static UI.
They were basically developed under the hood as "different contexts", and in their overhaul of the design they're factoring in all of the flight information so that it'll be available to Players regardless of what mode they're in, because it's still relevant when they're still flying, and therefore the information should be retained. In these modes referenced in the question, they're looking at potentially contextually changing out the "screens of cells" so that rather than the HUD changing for the different modes, they can shift elements around so that relevant information can still be displayed, rather than the new HUD for that mode just taking up the whole screen. Jared adds that it wasn't intentional to take away crucial information when using these modes, and that it's just something that's happened over the course of development that needs resolving. Simon adds that sometimes you don't realise it's going to cause issues until you try it, and this is one of those situations. Jared goes on to say that things can be developed in isolation, and then when they're integrated together into their game-dev branch, that's when they can see collisions and thus the creation of bugs.
Q13) Are there any updates regarding their plans for the landing UI improvements which are needed for the implementation of Hover mode? [36:45]
TL;DR The previous UI they had implemented, typically seen prior to 3.0, used a different renderer (3Di) and now they use Render-to-Texture. As a result, it's no longer compatible. They need to recreate this landing UI, but they're busy focusing on the MFDs right now, and they recognise they'll probably need some other stuff, such as a guidance system and AR elements.
A while back they changed the method that they used to render the UI, from what was called 3Di to Render-to-Texture, so now it's actually rendered as part of the screens and can be affected by post-effects. The original landing UI replaced the radar, which was built using 3Di and therefore wasn't compatible with Render-to-Texture, and that's why it was removed. They're now looking at bringing it back in some way, but maybe with a better design (this was a bit confusing, regarding whether Zane meant that the "original landing UI" or the "radar" was built using 3Di. I think all he's getting at is that the 3D representation we had pre-3.0 that was used for landing, was built using 3Di but wasn't compatible any more when they switched to RtT - this was addressed in Q03 of the previous episode of SCL. Here's my summary for that: https://www.reddit.com/starcitizen/comments/ca7xxy/a_summary_of_star_citizen_live_all_about/ ). They're focusing on the structure and the layout of the MFDs right now, and also recognise potentially needing some sort of guidance system, as well as having some Augmented Reality elements that are displayed in conjunction with that.
Q14) How do they intend to improve the legibility of UI elements that tend to sit over the environment, which can be very glaring, making the UI hard or even impossible to read? [38:15]
TL;DR The solution they're aiming for involves keeping the UI in-world. They'll have the UI displayed on geometry, and then that geometry can be dynamically tinted depending on the environment. At the same time, the text/info can be dynamically brightened to make sure it's still readable. They may also be able to use some sort of effect to achieve the same kind of goal, such as a blurred frosted glass effect. They can consider a back-shadow or black highlight, but they're concerned that it will conflict too much with their aesthetic aims.
They're looking at a few in-world solutions. The obvious thing to do is to add a drop-shadow to the UI or just make it black, but that somewhat destroys the aesthetic of it. So what they're looking at, which they started looking at with the Gladius but isn't finished yet, is having a system where it's contextually and dynamically reading the brightness of the environment and adjusting the brightness of the UI in response to that. Additionally, to make sure the HUDs look like they exist in-world, they want them displayed on actual geometry to ground it, and they can leverage that to maybe dynamically tint the geometry that the UI is on, as well as then brightening the UI if need be (depending on the environmental conditions). This solution is ideal because of being in-world (and thus not hindering immersion) but also because it leverages the in-game elements, making it more convincing. Jared asks for clarification, and Zane specifies that it'd involve tinting the physical glass pane but then also brightening the UI, like if you have your phone on automatic brightness and go outside into the sunlight, it'll auto-adjust to make it more readable (This is a thing?! My phone must be old). He adds that it's also an issue with the eye adaptation feature (where the Player's "eyes" adjust depending on how bright it is), because the UI becomes dim when you're on a planet during the daytime, as compared to being lit by the sun in space. They could potentially also have some sort of effect in the UI rendering tech, such as a blurred frosted glass effect, that could help with readability (particularly for the visor HUD in your helmet), and the same is true for busy backgrounds and not just bright ones.
Regarding a potential back-shadow, or a black highlight around the words and numbers on the UI (as often suggested by backers) it's definitely something they can consider but it'll depend on how subtle it can or can't be to work, because that might not fit with the aesthetic they're aiming for, which would therefore require them looking for a different solution. Simon adds that as with a lot of the UI, they'll concept different ideas to figure out which is the best way to solve the problem, before committing to implement something.
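(The "sample the environment, tint the pane, brighten the text" idea boils down to two curves driven by one luminance reading. A sketch with made-up numbers and a made-up formula, just to show the shape of it:)

```python
def adapt_ui(env_luminance, ui_base_brightness=0.6, tint_max=0.5):
    """Darken the glass pane behind the UI and brighten the UI text as the
    environment gets brighter. env_luminance runs from 0.0 (pitch black)
    to 1.0 (full sunlight). Returns (pane_tint_opacity, ui_brightness).
    Purely illustrative; the real system would read the rendered scene."""
    pane_tint = tint_max * env_luminance
    ui_brightness = min(1.0, ui_base_brightness
                        + (1.0 - ui_base_brightness) * env_luminance)
    return pane_tint, ui_brightness
```

(In the dark the pane stays clear and the UI stays dim; in full sunlight the pane tints toward its maximum opacity and the text goes to full brightness, much like a phone on auto-brightness.)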
Q15) Currently the MFDs on ships have a default configuration that must be changed each time the pilot enters the pilot seat. Are there intentions to add the ability to save MFD configuration presets of an individual Player's preferences? [43:24]
TL;DR Yes, and this is something that has to persist. The work they're doing on the MFDs will require them to load their state from something like an entity or the server. They're hoping that by the time they're done with the MFDs, that info will be available in those places (likely the server). They'll also be redesigned to have the most important information displayed by default, and hopefully it'll be possible to create and save presets for quick activation.
Yes, that's got to persist at some point, and the issue right now is just that it doesn't. In their new UI tech, the UI will be what they call "stateless", meaning it won't store any state about itself and instead takes everything from an entity or from what's on the server. As such, when they develop the MFD or implement the new design they're working on, they hope they'll be able to persist the current state of the MFDs as they were before the Player exited the seat/cockpit, even if the Player had changed tabs or moved things around. Simon adds that when they do this pass on the MFDs, they want to make sure that the most important information is shown by default, so hopefully there'll be less need to change things around. Zane adds that potentially they'll also be able to make it so that presets could be created, saved, and then activated quickly (he actually just said the "activated" part but that implies creating and saving presets).
Q16) Is there anything they can tell us about the ongoing process of refactoring the ship MFDs? [45:45]
TL;DR They want to move to a system where, when you're looking forwards, the MFDs will display a minimal configuration of information that is readable and useful whilst you're flying, but then they can show more in-depth information if you specifically focus on the screens. Additionally, the new UI tech they're developing (as part of moving away from Flash and Scaleform) will allow them to have just one binary file that they need to make changes in, which makes it easier to maintain the UIs.
Right now the MFDs are small, scaled down, and not readable. Previously they used to have what they called "support screens" which were screens with minimal information on them, with a font size that made the information more legible. They're looking to have a system where by default, the MFDs will be in this minimised configuration where they only show the information that you really need to know, and they do so in a way that's readable without focusing on the display. However, when you then focus on the display, they want it to contextually change to something more in-depth, which can work because now the MFD has more screen-space to show readable information. Zane adds that the cool thing about the UI tech he's helping to develop, is that they're taking cues from web development (which is also his background so he knows a lot about it) where there's a thing called "responsive design". This is where you can have a rule set up so that, if there's a box in their UI that goes beyond a certain point, it then shrinks down, and you can have different styles applied to that, and conditionally so depending on the size of the box. As such they're leveraging that to help with the reformulation of the UI on screens, and it could also be helpful when they potentially implement customisation of HUDs/UIs as a tool to manage and maintain it. Right now in-game they have different sized screens, where each size has its own binary file, meaning that if they want to change one then they have to go into each binary file and make a change. But if they can maintain just one UI, which then has different style rules applied to it, then that makes it much easier to maintain. So changing one thing would then make that change for each different manufacturer, and every kind of configuration. 
Simon adds that regarding the actual process right now, they're looking through the designs that already exist in-game and working closely with the vehicles team to figure out what they want to show on the screens, to plan out what's going to be in all of the MFDs, so that they can then redesign each screen to achieve its maximum potential based upon what information needs to be shown. After that, the UI tech will eventually reach a point where the screens are redesigned and the tech's ready to be put into the game.
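(The "responsive design" idea Zane borrows from web development is essentially conditional style rules keyed off the size of a box: one UI definition, different presentations per breakpoint, instead of one binary per screen size. A minimal sketch with breakpoints and widget names I've made up:)

```python
def mfd_style(width_px):
    """Pick a style rule set for an MFD based on its on-screen size.
    One definition, size-conditional styles -- the breakpoints and
    widget lists here are invented for illustration."""
    if width_px < 200:
        # Unfocused, scaled-down screen: minimal, large, legible info only.
        return {"layout": "minimal", "font_px": 18,
                "widgets": ["speed", "shields"]}
    elif width_px < 400:
        return {"layout": "compact", "font_px": 14,
                "widgets": ["speed", "shields", "power"]}
    else:
        # Focused screen with room to spare: full detail.
        return {"layout": "full", "font_px": 12,
                "widgets": ["speed", "shields", "power", "weapons", "cargo"]}
```

(This is also why one change can propagate to every manufacturer and screen size: the content is defined once, and only the rules differ.)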
Q17) Currently Players have to go to a kiosk to view the cargo inventory for their ships. Are there any plans to implement some sort of on-ship cargo UI so that Players can view their cargo inventory without finding a kiosk to do that? [50:18]
It's something that they will look at; they know that it's needed. What they're unsure of is when they'll get around to doing it (again, it comes down to prioritising and there'll be higher priorities right now, such as the HUD reworks they've extensively talked about so far).
Q18) Are there any plans to allow Players to prioritise the use of missiles or torpedoes through the MFDs? [50:58]
This is another thing they're going to look at. They're under the impression that they had this functionality previously, but it later broke. Simon adds that the UI does currently support this functionality, but that there's some refactoring that needs doing to get the weapons to "match up". It's something they want to do, which will be possible in the future, and with a better design.
Q19) Are there any plans to allow Players to see Points-of-Interest in other UI modes? Right now they're only view-able in the Quantum Drive mode. [51:41]
It's something that Simon's interested in doing, although he says it relates back to how they're going to manage what information is being shown, when it's being shown, and how. If they get to a point where the on-screen icons have been cut down to a sensible level, then they could consider whether it's worth having PoIs visible in other modes as well, and thus it'd be something that's worth having them look into. He says it's definitely the sort of thing you'd want to try out as you're developing it so that it can be iterated upon.
Q20) Would it be possible to have an ETA marker to show when a ship in Quantum Travel will arrive at its destination, rather than just showing the remaining distance? [52:26]
They think that this is a good idea, and so they'll be looking into it. Zane adds that they have an ETA for when a Player's Oxygen runs out, so they should be able to have one for QT.
(side note: shouldn't Oxygen/O2 in-game be Air instead? :thinking: )
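(The ETA itself is simple arithmetic whenever speed is roughly constant — a sketch:)

```python
def eta_seconds(remaining_distance_m, current_speed_mps):
    """Estimated time of arrival from remaining distance and speed.
    Returns None when speed is zero or negative (no meaningful ETA)."""
    if current_speed_mps <= 0:
        return None
    return remaining_distance_m / current_speed_mps

def format_eta(seconds):
    """Render seconds as M:SS for a HUD readout."""
    minutes, secs = divmod(int(seconds), 60)
    return f"{minutes}:{secs:02d}"
```

(Quantum Travel speed isn't constant during spool-up and deceleration, so a real implementation would presumably smooth or re-estimate this continuously, just as the oxygen timer must.)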
Q21) How do they feel about the current implementation of the Inner Thought system, and are there any plans to continue iterating on it? [53:07]
TL;DR There's an issue where the Inner Thought system displays text when it's not necessary to do so, such as when you're using an airlock and the text that exists on the console then also appears in the form of Inner Thought - this is unnecessary and needs to be resolved, which it soon will be thanks to their new UI tech. They'll also eventually revisit the visuals of the system, to make it look as good as possible. They also hint again that there's some UI WIP that isn't ready to show yet, but might be shown in a Q3 ISC episode.
Zane says that there are situations where the Inner Thought text appears when it shouldn't, such as over screens. A good example of this is when a Player is in an airlock and goes to use the console, and the Inner Thought text then appears over the console despite that same information already existing on the console (it's the same thing as when the door panel reads "Open" but then when you go to use it, the Inner Thought text appears on top of that as well). Their UI tech now allows for not having the IT text appear, which is particularly useful for things like elevators where the required information can be on or next to the buttons, without needing to use Inner Thought. They'll also be looking at the visuals of Inner Thought too, because although it looks okay now they feel it could be better. On a similar note, there's some other work they're doing at the moment on interactions that they're still figuring out, but it's something that's not quite ready to show just yet (Jared already said in the answer to Q03 of this episode that the Q3 ISC episodes will include some more looks at ongoing UI work, so it's possible that this will be shown as part of that).
Q22) A long time ago they talked about the potential of manufacturer-specific UIs. Is that still the plan? [54:39]
TL;DR Yes it is, and their new UI tech makes it even more possible, because it'll mean they don't have to have a binary file per manufacturer, but just one binary file for everything. They then have a "style sheet system" which allows them to have a white box outline for UI, which can then have different designs applied to it, and is a lot more simple than what it would otherwise be if working with Flash. They also talk about investigating the possibility of creating 3D UIs, which will mostly be used for the more advanced ships, like those from Origin or MISC.
Yes, and it's much more possible now with the UI tech because they have a "style sheet system". Previously (or currently?) this would require having a binary file for each manufacturer, which would be a pain to maintain, but the new UI tech (as mentioned previously) will allow for only one file so that only one change would need to be made to affect everything across the board. Zane explains further that the style sheet system is kind of like having a white box outline which can then have a visual description defined and applied to it, and changing between the different styles is simple because they can just use a drop-down menu to switch between manufacturers, and then see the visual description change between them. Simon adds that once the system is in place it gives them more opportunities to hand it over to the graphic designers who can create really nice designs which would then be a lot easier to just drop into the game, as opposed to being dependent on someone going into Flash and knowing how to code within Flash. Zane adds that with these style rules there are a lot of possibilities to differentiate between manufacturers, but also there are ways to do this through changing the layout, such as Origin and MISC having more holographic UIs. They're also investigating the initial engineering requirement to make it so that they can have 3D UIs as well, which would make holographic UIs look even more holographic. This would be particularly good for the more advanced ships, rather than the more retro ones. Zane comments about how right now every ship just has the retro UI, and that they want to significantly differentiate between the different tech levels of ships.
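(Zane's "white box outline plus a visual description per manufacturer" maps neatly onto a base definition with per-theme overrides. A toy version — the base values and per-manufacturer styles here are all invented by me:)

```python
# One base ("white box") definition, shared by every manufacturer.
BASE_STYLE = {"font": "default", "color": "white", "holographic": False}

# Per-manufacturer overrides, like a style sheet applied on top.
MANUFACTURER_STYLES = {
    "Origin": {"color": "cyan", "holographic": True},
    "MISC":   {"color": "amber", "holographic": True},
    "Drake":  {"color": "red"},
}

def styled_ui(manufacturer):
    """Merge a manufacturer's overrides onto the single base definition,
    so one change to BASE_STYLE propagates to every manufacturer."""
    return {**BASE_STYLE, **MANUFACTURER_STYLES.get(manufacturer, {})}
```

(Swapping the drop-down between manufacturers is then just calling this with a different key; the single shared base is what removes the one-binary-per-manufacturer maintenance problem.)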
Q23) The responsiveness of the MobiGlas can sometimes be a little slow. Is this an engineering problem? A UI problem? Something else? [58:18]
Zane reckons that the time it takes for the arm animation to play, as well as how long it takes for the MobiGlas to boot up, could be reduced, but they're not focused on the MobiGlas at the moment. He does reiterate that they're looking to overhaul the whole UI (which would most likely include the MobiGlas). They just need to set a target time for how long it should take between the Player pressing the button to open the MobiGlas, and the MobiGlas being open and ready to use. Simon adds that because the MobiGlas is supposed to be a holographic display, they could have the display start to appear before the Player's arm has finished moving.
Q24) Is there anything you can tell us about the future of MobiGlas? (despite it not being the focus right now) [59:36]
It's kinda similar to what they're doing with the ship MFDs. Once the ship and visor UIs are done, they'll probably look at the MobiGlas; part of that will likely involve talking to the game designers to make sure the MobiGlas works the way it's needed to, and they can incorporate the new tech at the same time. It's due an overhaul though, and Simon's looking forward to it. Zane adds that because the MobiGlas is 3D, it also depends on the UI tech supporting 3D UIs, which will need to be sorted out before they can make the MobiGlas's holographic UI truly 3D.
Q25) Is the UI team hiring, and what skills are needed most? [01:00:57]
Yes. The job specs on the website are slightly out of date, though, and they'll update them soon. They're looking for at least one programmer. They're not currently looking for artists or graphic designers, but that could change in the future. For programmers, the essentials are demonstrable experience and a knowledge of what makes good UI, such as why things work in other games. For artists, they look at a lot of graphic design work because of how relevant it is to UI work, but they also look for an understanding of why a particular screen works well in a particular app, or how it could be improved. Zane adds that a tools programmer would be helpful as well: because the UI is becoming data-driven, they're dealing with a lot of raw data, so they need to create a UI Editor for the designers and artists to interface with. It would need to be intuitive and easy to use, so a tools programmer who could help with that would be very handy.
Here's a link to CIG's Jobs page: https://cloudimperiumgames.com/join-us
- - - - -
The End. This one's a little later than usual 'cause I've been busy and shit. I wasn't even sure I'd get it done for today so I'm glad it worked out.
As always, I hope you all like this summary. Remember to be kind to each other, and I'll see you with the next one.
submitted by Nauxill to starcitizen [link] [comments]
