
The Next Processor Change is Within ARM's Reach

As you may have seen, I sent the following Tweet: “The Apple ARM MacBook future is coming, maybe sooner than people expect” https://twitter.com/choco_bit/status/1266200305009676289?s=20
Today, I would like to further elaborate on that.
tl;dr: Apple will be moving to ARM-based Macs in what I believe are 4 stages, starting around 2015 and ending around 2023-2025: the release of T1-chip MacBooks, the release of T2-chip MacBooks, the release of at least one lower-end ARM MacBook, and the transition of the full lineup to ARM. Reasons for each are below.
Apple is very likely going to switch their CPU platform to their in-house silicon designs with an ARM architecture. This understanding is fairly common amongst various Apple insiders. Here is my personal take on how this switch will happen and be presented to the consumer.
The first question would likely be “Why would Apple do this again?”. Throughout their history, Apple has already made two other storied CPU architecture switches - first from the Motorola 68k to PowerPC in the early 90s, then from PowerPC to Intel in the mid 2000s. Why make yet another? Here are the leading reasons:
A common refrain heard on the Internet is the suggestion that Apple should switch to using CPUs made by AMD, and while this has been considered internally, it will most likely not be chosen as the path forward, even for their megalithic giants like the Mac Pro. Even though AMD would mitigate Intel’s current set of problems, it does nothing to help the issue of the x86_64 architecture’s problems and inefficiencies, on top of jumping to a platform that doesn’t have a decade of proven support behind it. Why spend a lot of effort re-designing and re-optimizing for AMD’s platform when you can just put that effort into your own, and continue the vertical integration Apple is well-known for?
I believe that the internal development for the ARM transition started around 2015/2016 and is considered to be happening in 4 distinct stages. Not all of this is information from Apple insiders; some of it is my own interpretation based on information gathered from supply-chain sources, examination of MacBook schematics, and other indicators from Apple.

Stage 1 (from 2014/2015 to 2017):

The rollout of computers with Apple’s T1 chip as a coprocessor. This chip is very similar to Apple’s T8002 chip design, which was used for the Apple Watch Series 1 and Series 2. The T1 is primarily present on the first TouchID enabled Macs, 2016 and 2017 model year MacBook Pros.
Considering the amount of time required to design and validate a processor, this stage most likely started around 2014 or 2015, with early experimentation to see whether an entirely new chip design would be required, or if it would be sufficient to repurpose something in the existing lineup. As we can see, the general purpose ARM processors aren’t a one-trick pony.
To get a sense of the decision making at the time, let’s look back a bit. The year is 2016, and we're witnessing the beginning of the stagnation of Intel's processor lineup. There is not a lot to look forward to other than another “+” being added to the 14nm fabrication process. The MacBook Pro has used the same design for many years now, and its age is starting to show. Moving to AMD is still very questionable, as they’ve historically not been able to match Intel’s performance or functionality, especially at the high end, and since the “Ryzen” lineup is still unreleased, there are absolutely no benchmarks or other data to show they are worth consideration, and AMD’s most recent line of “Bulldozer” processors was very poorly received. Now is probably as good a time as any to begin experimenting with the in-house ARM designs, but it’s not time to dive into the deep end yet: our chips are not nearly mature enough to compete, and it’s not yet certain how long Intel will be stuck in the mud. As well, it is widely understood that Apple and Intel have an exclusivity contract in exchange for advantageous pricing. Any transition would take considerable time and effort, and since there is no current viable alternative to Intel, the in-house chips will need to advance further, and breaching a contract with Intel is too great a risk. So it makes sense to start with small deployments, to extend the timeline, stretch out to the end of the contract, and eventually release a real banger of a Mac.
Thus, the 2016 Touch Bar MacBooks were born, alongside the T1 chip mentioned earlier. There are good reasons for abandoning the piece of hardware previously used for a similar purpose, the SMC or System Management Controller. I suspect that the biggest reason was to allow early analysis of the challenges that would be faced migrating Mac built-in peripherals and IO to an ARM-based controller, as well as exploring the manufacturing, power, and performance results of using the chips across a broad deployment, and analyzing any early failure data, then using this to patch any issues, enhance processes, and inform future designs looking towards the 2nd stage.
The former SMC duties now moved to the T1 include things like
The T1 chip also communicates with a number of other controllers to manage a MacBook’s behavior. Even though it’s not a very powerful CPU by modern standards, it’s already responsible for a large chunk of the machine’s operation. Moving control of these peripherals to the T1 chip also brought about the creation of the fabled BridgeOS software, a shrunken-down watchOS-based system that operates fully independently of macOS and the primary Intel processor.
BridgeOS is the first step for Apple’s engineering teams to begin migrating underlying systems and services to integrate with the ARM processor via BridgeOS, and it allowed internal teams to more easily and safely develop and issue firmware updates. Since BridgeOS is based on a standard and now well-known system, it means that they can leverage existing engineering expertise to flesh out the T1’s development, rather than relying on the more arcane and specialized SMC system, which operates completely differently and requires highly specific knowledge to work with. It also allows reuse of the same fabrication pipeline used for Apple Watch processors, and eliminated the need to have yet another IC design for the SMC, coming from a separate source, to save a bit on cost.
Also during this time, on the software side, “Project Marzipan”, today Catalyst, came into existence. We'll get to this shortly.
For the most part, Stage 1 went without any major issues. There were a few firmware problems at first during the product launch, but they were quickly solved with software updates. Once the engineering teams had gained experience building for, manufacturing, and shipping the T1 systems, Stage 2 could begin.

Stage 2 (2018-Present):

Stage 2 encompasses the rollout of Macs with the T2 coprocessor, replacing the T1. This includes a much wider lineup, including MacBook Pro with Touch Bar, starting with 2018 models, MacBook Air starting with 2018 models, the iMac Pro, the 2019 Mac Pro, as well as Mac Mini starting in 2018.
With this iteration, the more powerful T8012 processor design was used, which is a further revision of the T8010 design that powers the A10 series processors used in the iPhone 7. This change provided a significant increase in computational ability and brought about the integration of even more devices into T2. In addition to the T1’s existing responsibilities, T2 now controls:
Those last 2 points are crucial for Stage 2. Under this new paradigm, the vast majority of the Mac is now under the control of an in-house ARM processor. Stage 2 also brings iPhone-grade hardware security to the Mac. These T2 models also incorporated a supported DFU (Device Firmware Update, more commonly “recovery mode”), which acts similarly to the iPhone DFU mode and allows restoration of the BridgeOS firmware in the event of corruption (most commonly due to user-triggered power interruption during flashing).
Putting more responsibility onto the T2 again allows for Apple’s engineering teams to do more early failure analysis on hardware and software, monitor stability of these machines, experiment further with large-scale production and deployment of this ARM platform, as well as continue to enhance the silicon for Stage 3.
A few new user-visible features were added as well in this stage, such as support for the passive “Hey Siri” trigger, and offloading image and video transcoding to the T2 chip, which frees up the main Intel processor for other applications. BridgeOS was bumped to 2.0 to support all of these changes and the new chip.
On the macOS software side, what was internally known as Project Marzipan was first demonstrated to the public. Though it was originally discovered around 2017, and most likely began development and testing within later parts of Stage 1, its effects could be seen in 2018 with the release of iPhone apps, now running on the Mac using the iOS SDKs: Voice Recorder, Apple News, Home, Stocks, and more, with an official announcement and public release at WWDC in 2019. Catalyst would come to be the name of Marzipan used publicly. This SDK release allows app developers to easily port iOS apps to run on macOS, with minimal or no code changes, and without needing to develop separate versions for each. The end goal is to allow developers to submit a single version of an app, and allow it to work seamlessly on all Apple platforms, from Watch to Mac. At present, iOS and iPadOS apps are compiled for the full gamut of ARM instruction sets used on those devices, while macOS apps are compiled for x86_64. The logical next step is to cross this bridge, and unify the instruction sets.
The new products using the T2 have not been received quite as well as those with the T1. Many users have noticed how this change contributes further towards machines with limited to no repair options outside of Apple’s repair organization, as well as some general issues with bugs in the T2.
Products with the T2 also no longer have the “Lifeboat” connector, which was previously present on the 2016 and 2017 model Touch Bar MacBook Pro. This connector allowed a certified technician to plug in a device called a CDM Tool (Customer Data Migration Tool) to recover data off of a machine that was not functional. The removal of this connector limits the options for data recovery in the event of a problem, and Apple has never offered any data recovery service, meaning that an irreparable failure of the T2 chip or the primary board would result in complete data loss, in part due to the strong encryption provided by the T2 chip (even if the data could be extracted, the encryption keys would be lost with the T2 chip). The T2 also brought about the pairing of serial numbers for certain internal components, such as the solid state storage, display, and trackpad, among others. In fact, many other controllers on the logic board are now also paired to the T2, such as the WiFi and Bluetooth controller, the PMIC (Power Management Controller), and several other components. This is the exact same system used on newer iPhone models and is quite familiar to technicians who repair iPhone logic boards. While these changes are fantastic for device security and for corporate and enterprise users, allowing for a very high degree of assurance that devices will refuse to boot if tampered with in any way - even from storied supply chain attacks, or other malfeasance that can be done with physical access to a machine - they have created difficulty for consumers, who more often lack the expertise or awareness to keep critical data backed up, as well as the funds to perform the necessary repairs through authorized repair providers. Other reported issues suspected to be related to the T2 are audio “cracking” or distortion on the internal speakers, and BridgeOS becoming corrupt following a firmware update, resulting in a machine that can’t boot.
I believe these hiccups will be properly addressed once macOS is fully integrated with the ARM platform. This stage of the Mac is more like a chimera of an iPhone and an Intel based computer. Technically, it does have all of the parts of an iPhone present within it, cellular radio aside, and I suspect this fusion is why these issues exist.
Recently, security researchers discovered an underlying security problem present within the Boot ROM code of the T1 and T2 chip. Due to being the same fundamental platform as earlier Apple Watch and iPhone processors, they are vulnerable to the “checkm8” exploit (CVE-2019-8900). Because of how these chips operate in a Mac, firmware modifications caused by use of the exploit will persist through OS reinstallation and machine restarts. Both the T1 and T2 chips are always on and running, though potentially in a heavily reduced power usage state, meaning the only way to clean an exploited machine is to reflash the chip, triggering a restart, or to fully exhaust or physically disconnect the battery to flush its memory. Fortunately, this exploit cannot be done remotely and requires physical access to the Mac for an extended duration, as well as a second Mac to perform the change, so the majority of users are relatively safe. As well, with a very limited execution environment and access to the primary system only through a “mailbox” protocol, the utility of exploiting these chips is extremely limited. At present, there is no known malware that has used this exploit. The proper fix will come with the next hardware revision, and is considered a low priority due to the lack of practical usage of running malicious code on the coprocessor.
At the time of writing, all current Apple computers have a T2 chip present, with the exception of the 2019 iMac lineup. This will change very soon with the expected release of the 2020 iMac lineup at WWDC, which will incorporate a T2 coprocessor as well.
Note: from here on, this turns entirely into speculation based on info gathered from a variety of disparate sources.
Right now, we are in the final steps of Stage 2. There are strong signs that a MacBook (12”) with an ARM main processor will be announced this year at WWDC (“One more thing...”), at a Fall 2020 event, a Q1 2021 event, or WWDC 2021. Based on the lack of a more concrete answer, WWDC 2020 will likely not see it, but I am open to being wrong here.

Stage 3 (Present/2021 - 2022/2023):

Stage 3 involves introducing the first version of at least one fully ARM-powered Mac into Apple’s computer lineup.
I expect this will come in the form of the previously-retired 12” MacBook. There are rumors that Apple is still working internally to perfect the infamous Butterfly keyboard, and there are also signs that Apple is developing an A14x-based processor with 8-12 cores designed specifically for use as the primary processor in a Mac. It makes sense that this model could see the return of the Butterfly keyboard, considering how thin and light it is intended to be, and using an A14x processor would make it a very capable, very portable machine that should give customers a good taste of what is to come.
Personally, I am excited to test the new 12" “ARMbook”. I do miss my own original 12", even with all the CPU failure issues those older models had. It was a lovely form factor for me.
It's still not entirely known whether the physical design of these will change from the retired version, exactly how many cores it will have, the port configuration, etc. I have also heard rumors about the 12” model possibly supporting 5G cellular connectivity natively thanks to the A14 series processor. All of this will most likely be confirmed soon enough.
This 12” model will be the perfect stepping stone for stage 3, since Apple’s ARM processors are not yet a full-on replacement for Intel’s full processor lineup, especially at the high end, in products such as the upcoming 2020 iMac, iMac Pro, 16” MacBook Pro, and the 2019 Mac Pro.
Performance of Apple’s ARM platform compared to Intel has been a big point of contention over the last couple years, primarily due to the lack of data representative of real-world desktop usage scenarios. The iPad Pro and other models with Apple’s highest-end silicon still lack the ability to execute a lot of high end professional applications, so data about anything more than video editing and photo editing benchmarks quickly becomes meaningless. While there are completely synthetic benchmarks like Geekbench, Antutu, and others that try to bridge the gap, they are very far from being accurate or representative of real-world performance in many instances. Even though the Apple ARM processors are incredibly powerful, and I do give constant praise to their silicon design teams, there still just isn’t enough data to show how they will perform for real-world desktop usage scenarios, and synthetic benchmarks are like standardized testing: they only show how good a platform is at running the synthetic benchmark. This type of benchmark stresses only very specific parts of each chip at a time, rather than how well it does a general task, and then boils down the complexity and nuances of each chip into a single numeric score, which is not a remotely accurate way of representing processors with vastly different capabilities and designs. It would be like gauging how well a person performs a manual labor task based on averaging only the speed of every individual muscle in the body, regardless of if, or how much, each is used. A specific group of muscles being stronger or weaker than others could wildly skew the final result, and grossly misrepresent the performance of the person as a whole. Real-world program performance will be the key in determining the success and future of this transition, and it will have to be great on this 12" model, but not just in a limited set of tasks; it will have to be great at *everything*. It is intended to be the first Horseman of the Apocalypse for the Intel Mac, and it better behave like one. Consumers have been expecting this, especially after 15 years of Intel processors, the continued advancement of Apple’s processors, and the decline of Intel’s market lead.
The point of this “demonstration” model is to ease both users and developers into the desktop ARM ecosystem slowly. Much like how the iPhone X paved the way for FaceID-enabled iPhones, this 12" model will pave the way towards ARM Mac systems. Some power-user type consumers may complain at first, depending on the software compatibility story, then realize it works just fine since the majority of the computer users today do not do many tasks that can’t be accomplished on an iPad or lower end computer. Apple needs to gain the public’s trust for basic tasks first, before they will be able to break into the market of users performing more hardcore or “Pro” tasks. This early model will probably not be targeted at these high-end professionals, which will allow Apple to begin to gather early information about the stability and performance of this model, day to day usability, developmental issues that need to be addressed, hardware failure analysis, etc. All of this information is crucial to Stage 4, or possibly later parts of Stage 3.
The 2 biggest concerns most people have with the architecture change are app support and Bootcamp.
Any apps released through the Mac App Store will not be a problem. Because App Store apps are submitted as LLVM IR (“Bitcode”), the system can automatically download versions compiled and optimized for ARM platforms, similar to how App Thinning on iOS works. For apps distributed outside the App Store, things might be more tricky. There are a few ways this could go:
As for Bootcamp, while ARM-compatible versions of Windows do exist and are in development, they come with their own similar set of app support problems. Microsoft has experimented with emulating x86_64 on their ARM-based Surface products, and some other OEMs have created their own Windows-powered ARM laptops, but with very little success. Performance is a problem across the board, with other ARM silicon not being anywhere near as advanced, and with the majority of apps in the Windows ecosystem that were not developed in-house at Microsoft running terribly due to the x86_64 emulation software. If Bootcamp does come to the early ARM MacBook, it more than likely will run very poorly for anything other than Windows UWP apps. There is a high chance it will be abandoned entirely until Windows becomes much more friendly to the architecture.
I believe this will also be a very crucial turning point for the MacBook lineup as a whole. At present, the iPad Pro paired with the Magic Keyboard is, in many ways, nearly identical to a laptop, with the biggest difference being the system software itself. While Apple executives have outright denied plans of merging the iPad and MacBook line, that could very well just be a marketing stance, shutting down the rumors in anticipation of a well-executed surprise. I think that Apple might at least re-examine the possibility of merging Macs and iPads in some capacity, but whether they proceed or not could be driven by consumer reaction to both products. Do they prefer the feel and usability of macOS on ARM, and like the separation of both products? Is there success across the industry of the ARM platform, both at the lower and higher end of the market? Do users see that iPadOS and macOS are just 2 halves of the same coin? Should there be a middle ground, and a new type of product similar to the Surface Book, but running macOS? Should Macs and iPads run a completely uniform OS? Will iPadOS ever expose the same sort of UNIX-based tools for IT administrators and software developers that macOS presently has? These are all very real questions that will pop up in the near future.
The line between Stage 3 and Stage 4 will be blurry, and will depend on how Apple wishes to address different problems going forward, and what the reactions look like. It is very possible that only the 12” model will be released at first, or that a handful more lower-end laptop and desktop products could be released, with high-performance Macs following in Stage 4, or perhaps everything but enterprise products like the Mac Pro will be switched fully. Only time will tell.

Stage 4 (the end goal):

Congratulations, you’ve made it to the end of my TED talk. We are now well into the 2020s and COVID-19 Part 4 is casually catching up to the 5G = Virus crowd. All Macs have transitioned fully to ARM. iMac, MacBooks Pro and otherwise, Mac Pro, Mac Mini, everything. The future is fully Apple from top to bottom, and vertical integration leading to market dominance continues. Many other OEMs have begun to follow this path to some extent, creating more demand for a similar class of silicon from other firms.
The remainder here is pure speculation with a dash of wishful thinking. There are still a lot of things that are entirely unclear. The only concrete thing is that Stage 4 will happen when everything is running Apple’s in-house processors.
By this point, consumers will be quite familiar with the ARM Macs existing, and developers will have had enough time to transition apps fully over to the newly unified system. Any performance, battery life, or app support concerns will not be an issue at this point.
There are no more details here, it’s the end of the road, but we are left with a number of questions.
It is unclear if Apple will stick to AMD's GPUs or whether they will instead opt to use their in-house graphics solutions that have been used since the A11 series of processors.
How Thunderbolt support on these models of Mac will be achieved is unknown. While Intel has made it openly available for use, and there are plans to have USB and Thunderbolt combined in a single standard, it’s still unclear how it will play along with Apple processors. Presently, iPhones do support connecting devices via PCI Express to the processor, but it has only been used for iPhone and iPad storage. The current Apple processors simply lack the number of lanes required for even the lowest end MacBook Pro. This is an issue that would need to be addressed in order to ship a full desktop-grade platform.
There is also the question of upgradability for desktop models, and if and how there will be a replaceable, socketed version of these processors. Will standard desktop and laptop memory modules play nicely with these ARM processors? Will they drop standard memory across the board, in favor of soldered options, or continue to support user-configurable memory on some models? Will my 2023 Mac Pro play nicely with a standard PCI Express device that I buy off the shelf? Will we see a return of “Mac Edition” PCI devices?
There are still a lot of unknowns, and guessing any further in advance is too difficult. The only thing that is certain, however, is that Apple processors coming to Mac is very much within arm’s reach.
submitted by Fudge_0001 to apple

Mega Unpopular Opinion: Take-home projects can be great!

Ah, I have been debating whether or not I wanted to write this for a while now, but after seeing a few recent threads with 10-50 comments unanimously hating on take-home projects, I figured I would share my opinion.
Some of you may not read until the end, so let me preface this by saying not all take-home projects are great. I am on your side in that you should not complete a take-home project if any of the following are true:
...

With that being said, I will now move on to why I think take-home projects can be great.
For starters, it weeds out sooooo much of the competition. If you look at some job postings on LinkedIn, they can have 200+ applicants in 24 hours, and that is not even accounting for people who find the job via other means (i.e. other job boards, company website, etc). That's a lot of applicants. Now, I know better than to assume that this subreddit is representative of the whole software industry, but clearly a take-home project potentially gets candidates TO WEED THEMSELVES OUT. So 200 candidates may have applied, but now you're competing with a significantly smaller percentage of people who actually wanted to take the time and do the take-home project. Your odds are much better now.
Now, I know exactly what you're thinking. You don't want to spend the 8-12 hours it would take to complete this take-home project, and you'd rather spend your time casting your net farther and shotgunning your resume out to more companies, but WHY NOT BOTH? You're the one looking for a job, and you are really not in the position to weed yourself out of potential employment. Some of you people have been on the hunt for a job for months and still won't stoop down to the level of giving a company that much time without being guaranteed another interview / a job. Newsflash: doing a project increases your chance of getting a job, just like shotgunning your resume, AND you get to practice / show off your programming skills (who knows, maybe mess around and make a project you can put on your GitHub as a sample of your work for other employers to see). On top of this, if you are someone with a lot of free time - I'm looking at you, new grads - and don't have a family/responsibilities that you need to take care of, then you really can't complain about time. Let's face it, instead of doing this project, you're watching Silicon Valley on HBO for the third time "to relax" after a "long day" of filling out the same Workday application forms. Come on, searching for a full time job should be a 40 hour/week job in and of itself.
My next point is that these take home projects sometimes substitute final/on-site interviews. Yea, those 5-hour interviews where you meet every hiring manager and their mother and get grilled round after round because you can't find the optimal solution for sorting a reverse binary search tree that is upside down, flipped, and cooked well done while someone is staring at you, asking questions, and forbidding you from using any resources you would have at your disposal in (almost) any given real world scenario. Yea, those are the real stress-inducing woes of the software interview process, and I would think people would want to avoid those at all costs. Anecdotally, the company that I started working for 3 months ago gave me the choice of a 4 1/2 hour Zoom interview consisting of 4 one-hour technical interviews with different hiring managers, or a take-home project that would take 6-10 hours with a 1 1/2 hour follow-up discussing my project. The decision was so obvious - stress study an entire week before the interview (hint: this alone probably would take up more time than the take-home project, but on the other hand does prepare you for future interviews) and then endure the torture that is 4+ hours on a Zoom call / in an office coding on a whiteboard, or spend about 1-2 hours a day for a week, with access to all resources, leisurely coding up a project that, if done correctly, increases your chance of getting a job astronomically. Not to mention, this option is becoming much more popular with COVID and WFH and the lack of being able to get candidates into the office.

All in all, I really wish more companies offered take-home projects as at least an option for their interview process. In my opinion, they are more informative for both parties, as it represents the work you will be doing if you were to get the job, and it is indicative of the level of effort and knowledge you possess in context of the position they are seeking to fill. I really wish everyone on here would stop spreading their hatred for take-home projects, especially to new grads who have never even done them. And for the love of god stop saying to bill the company for making you do a take-home project, that is just the silliest thing I have ever heard, and I DOUBT any company would ever reply to that kind of an invoice. If you really have that much aversion to them, just don't bother.

TL;DR: I believe some take-home projects are worth doing ¯\_(ツ)_/¯
submitted by Kixstander to cscareerquestions

Ambrosia and Registration

Now that Ambrosia is gone, new registrations are no longer possible, and due to their expiring codes, using legitimate license keys has become difficult. We may hope to see a few of their games revived in the future but at present, only the original releases are available. Perhaps this case study on Ambrosia's registration algorithms will be useful to some.

The Old System

In their earliest days, ASW didn't require registration, but they eventually began locking core features away behind codes. All of their classic titles use the original algorithm by Andrew Welch.
Given a licensee name, number of copies, and game name, the code generator runs through two loops. The first loop iterates over each letter of the capitalized licensee name, adding the ASCII representation of that letter with the number of copies and then rotating the resulting bits. The second loop repeats that operation, only using the game's name instead of the license holder's name.
Beginning with Mars Rising, later games added a step to these loops: XOR the current code with the common hex string $DEADBEEF. However, the rest of the algorithm remained essentially unchanged.
The resulting 32 bits are converted into a text registration code by adding the ASCII offset of $41 to each hex digit. This maps the 32-bit string into 8 characters, but since a hex digit can only encode 16 values, codes only contain the first 16 letters of the alphabet.
The following chart shows an example using a well-known hacked code for Slithereens.
                                Iteration 1 ('A' in ANONYMOUS)
    Name: Anonymous                 Code = $0
                                      + $41
    Number: 100 (hex: $64)  ->        << 6           ...  ->  Code = $FD53 FFA0
    Game: Slithereens                 + $64
                                      ^ $DEAD BEEF
                                      >> 1

                                Add $41 to each digit:
    Registration            ->     $41 + $F = $50 = P   ->  Reverse string ->  ------------
                                    $41 + $D = $4E = N                         | AKPPDFNP |
                                    ...                                        ------------
Here is a Python implementation of the v1 system: aswreg_v1.py
Once you have the bitstring module installed via sudo pip install bitstring, you can test the output yourself with python aswreg_v1.py "Anonymous" 100 "Slithereens".
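For illustration, here is a condensed sketch of that v1 loop in plain Python. The per-letter ordering and the 32-bit rotation amounts (<< 6, >> 1) follow my reading of the chart above and may not match the shipped generator exactly; treat the linked aswreg_v1.py as the authoritative version.

    def rotl32(x, n):
        # rotate a 32-bit value left by n bits
        return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

    def rotr32(x, n):
        # rotate a 32-bit value right by n bits
        return ((x >> n) | (x << (32 - n))) & 0xFFFFFFFF

    def v1_code(licensee, copies, game, xor_deadbeef=True):
        code = 0
        for text in (licensee.upper(), game.upper()):   # loop 1: licensee, loop 2: game
            for ch in text:
                code = (code + ord(ch)) & 0xFFFFFFFF    # add the letter's ASCII value
                code = rotl32(code, 6)
                code = (code + copies) & 0xFFFFFFFF     # add the number of copies
                if xor_deadbeef:                        # Mars Rising and later only
                    code ^= 0xDEADBEEF
                code = rotr32(code, 1)
        digits = "{:08X}".format(code)                  # 32 bits -> 8 hex digits
        letters = "".join(chr(0x41 + int(d, 16)) for d in digits)
        return letters[::-1]                            # reverse the string, per the chart

    print(v1_code("Anonymous", 100, "Slithereens"))     # compare against the known AKPPDFNP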

The New System

As Ambrosia's Matt Slot explains, the old system continued to allow a lot of piracy, so in the early 2000's they decided to switch to a more challenging registration system. This new method was based on polynomial hashing and included a timestamp so that codes could be expired and renewed. Ambrosia now had better control over code distribution, but they assumed their renewal server would never be shut down...
They also took more aggressive steps to reduce key sharing. The registration app checks against a list of blacklisted codes, and if found to be using one, the number of licenses is internally perturbed so that subsequent calculations fail. To combat tampering, your own information can get locally blacklisted in a similar manner if too many failed attempts occur, at least until the license file is deleted. Furthermore, the app attempts to verify the system time via a remote time server to minimize registration by changing the computer's clock.
You can disable the internet connection, set the clock back, and enter codes. There's also a renewal bot for EV: Nova. But let us look at the algorithm more closely.

64-bit Codes

The first noticeable difference is that registration codes in v2 are now 12 digits, containing both letters and numbers. This is due to a move from a 32-bit internal code to a 64-bit one. Rather than add an ASCII offset to hex digits, every letter or number in a new registration code has a direct mapping to a chunk of 5 bits. Using 5 bits per digit supports up to 32 values, or almost all letters of the alphabet and digits up to 9 (O, I, 0, and 1 were excluded given their visual similarities).
The resulting 64 bits (really only 60 because the upper 4 are unused: 12 digits * 5 bits each = 60) are a combination of two other hashes XOR'd together. This is a notable change from v1 because it only used the registration code to verify against the hashing algorithm. Only the licensee name, number of copies, and game name were really used. In v2, the registration code is itself a hash which contains important information like a code's timestamp.
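To make the packing concrete, here is a small sketch of the text-to-binary conversion just described. The 32-character alphabet ordering and the packing direction below are assumptions for illustration (the write-up only states that O, I, 0, and 1 are excluded); aswreg_v2.py contains the real mapping.

    ALPHABET = "ABCDEFGHJKLMNPQRSTUVWXYZ23456789"   # hypothetical ordering, 32 symbols

    def text_to_bincode(code):
        # pack 12 characters x 5 bits into one integer (the upper 4 bits stay zero)
        value = 0
        for ch in code.replace("-", "").upper():
            value = (value << 5) | ALPHABET.index(ch)
        return value

    def bincode_to_text(value):
        chars = []
        for _ in range(12):
            chars.append(ALPHABET[value & 0x1F])      # low 5 bits -> one character
            value >>= 5
        body = "".join(reversed(chars))
        return "-".join(body[i:i + 4] for i in range(0, 12, 4))

    code = text_to_bincode("L4B5-9HJ5-P3NB")
    assert bincode_to_text(code) == "L4B5-9HJ5-P3NB"  # round-trips under these assumptions
    print(hex(code))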

Two Hashes

To extract such information from the registration code, we must reverse the XOR operation and split out the two hashes which were combined. Fortunately, XOR is reversible, and we can compute one of the hashes. The first hash, which I'll call the userkey, is actually quite similar to v1's algorithm. It loops through the licensee name, adding the ASCII value, number of copies, and shifting bits. This is repeated with the game name. An important change is including multiplication by a factor based on the string size.
The second hash, which I'll call the basekey, is the secret sauce of v2; it's what you pay Ambrosia to generate when registering a product. It is not computed by the registration app, but there are several properties by which it must be validated.
The chart below visualizes the relationships among the various hashes, using the well-known "Barbara Kloeppel" code for EV: Nova.
    TEXTCODE:                 ------------------
                              | L4B5-9HJ5-P3NB |
                              ------------------      HASH1 (userkey):
                                      |               calculated from licensee name,
                                      |               copies, and game name
    BINCODE:
    5 bits per character,                                 ----------------------
    plus factors & rotation                          /->  | 0x0902f8932acce305 |
                                                    /     ----------------------
              ----------------------               /
              | 0x0008ecc1c2ee5e00 |  <-- XOR
              ----------------------               \
                                                    \     ----------------------
                                                     \->  | 0x090a1452e822bd05 |
                                                          ----------------------
                                                      HASH2 (basekey):
                                                      generated by Ambrosia,
                                                      extracted via XOR
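Numerically, the relationship in the chart is a single XOR. Using the sample values (with the labels following my reading of the diagram), the basekey falls straight out once the bincode and the locally computable userkey are known:

    bincode = 0x0008ecc1c2ee5e00   # "L4B5-9HJ5-P3NB" unpacked to 60 bits
    userkey = 0x0902f8932acce305   # hashed from licensee name, copies, and game name
    basekey = bincode ^ userkey

    assert basekey == 0x090a1452e822bd05   # matches the sample basekey examined below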

The Basekey

The basekey is where we must handle timestamps and several validation checks. Consider the binary representation of the sample 0x090a1452e822bd05:
binary basekey (above) and indices for reference (below):

    0000 1001 0000 1010 0001 0100 0101 0010 1110 1000 0010 0010 1011 1101 0000 0101
    b0 b3   b7   b11  b15  b19  b23  b27  b31  b35  b39  b43  b47  b51  b55  b59  b63

Timestamps

Timestamps are encoded as a single byte comprised of the bits indexed at b56,51,42,37,28,23,14,9 in the basekey. In this example, the timestamp is 01100010 or 0x62 or 98.
The timestamp represents the number of fortnights that have passed since Christmas Day, 2000 Eastern time, modulo 256 to fit in one byte. For example, 98 fortnights places the code at approximately October 2004.
Stored as a single byte, there are 256 unique timestamps. This is 512 weeks or about 10 years. Yes, this means that a code's validity rotates approximately once every decade.
After the code's timestamp is read, it is subtracted from the current timestamp (generated from the system clock or network time server if available). The difference must be less than 2, so codes are valid for 4 weeks or about a month at a time.
Of note, Pillars of Garendall has a bug in which the modulo is not taken correctly, so the timestamp corresponding to 0xFF is valid without expiry.
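As a sketch of the timestamp handling described above, taking b0 as the most significant bit of the 64-bit basekey (which matches the index chart), and glossing over the Eastern-time detail and the modulo-256 wraparound:

    from datetime import datetime, timedelta

    TIMESTAMP_BITS = (56, 51, 42, 37, 28, 23, 14, 9)   # most significant timestamp bit first
    EPOCH = datetime(2000, 12, 25)                     # Christmas Day, 2000

    def read_timestamp(basekey):
        ts = 0
        for idx in TIMESTAMP_BITS:
            ts = (ts << 1) | ((basekey >> (63 - idx)) & 1)   # b0 = MSB of the 64 bits
        return ts

    basekey = 0x090a1452e822bd05
    ts = read_timestamp(basekey)
    print(ts, EPOCH + timedelta(weeks=2 * ts))   # 98 fortnights -> autumn 2004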

Validity Check

The last three bits, b60-63, contain the sum of all other 3-bit chunks in the basekey, modulo 7. Without the correct number in these bits, the result will be considered invalid.
To this point, we have covered sufficient material to renew licenses. The timestamp can be changed, the last three bits updated, the result XOR'd with the userkey, and finally, the code converted from binary to text.

Factors for Basekey Generation

I was next curious about code generation. For the purposes of this write-up, I have not fully reverse engineered the basekey, only duplicated the aspects which are used for validation. This yields functional keys, just not genuine ones. If the authors of the EV: Nova renewal bot have fully reversed the algorithm, perhaps they will one day share the steps to genuine basekey creation.
One aspect validated by the registration app is that the licensee name, number, and game name can be modified to yield a set of base factors. These are then multiplied by some number and written into the basekey. We do not need the whole algorithm; we simply must check that the corresponding regions in the basekey are multiples of the appropriate factors.
The regions of note in the basekey are f1 = b5-9,47-51,33-37,19-23, f2 = b43-47,29-33,15-19,57-61, and f3 = b24-28,10-14,52-56,38-42. The top 5 bits and f3 are never actually checked, so they can be ignored.
Considering f1 and f2, the values in the sample basekey are 0x25DA and 0x1500, respectively. The base factors are 0x26 and 0x1C, which have been multiplied by 0xFF and 0xC0, respectively.
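Checking the arithmetic on those sample numbers (the bit-gathering for f1 and f2 is omitted here; aswreg_v2core.py shows how the regions are pulled out of the basekey):

    f1, f2 = 0x25DA, 0x1500        # regions read out of the sample basekey
    base1, base2 = 0x26, 0x1C      # base factors derived from name, copies, and game

    assert f1 == base1 * 0xFF      # 0x26 * 0xFF = 0x25DA
    assert f2 == base2 * 0xC0      # 0x1C * 0xC0 = 0x1500
    print("both regions are clean multiples of their base factors")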
Rather than analyze the code in detail, I wrote a small script to translate over the disassembled PPC to Python wholesale. It is sufficient for generating keys to EV: Nova, using the perfectly-valid multiple of 1x, but I have found it fails for other v2 products.

Scripts

Here is a Python implementation for v2: aswreg_v2.py and aswreg_v2core.py
With bitstring installed, you can renew codes like python aswreg_v2.py renew "L4B5-9HJ5-P3NB" "Barbara Kloeppel" 1 "EV Nova" (just sample syntax, blacklisted codes will still fail in the app). There's also a function to check a code's timestamp with date or create a new license with generate.
As earlier cautioned, generating basekeys relies on code copied from disassembled PPC and will likely not work outside EV: Nova. In my tests with other v2 products, all essential parts of the algorithm remain the same, even the regions of the basekey which are checked as multiples of the factors. What differs is the actual calculation of base factors. Recall that these keys were created by Ambrosia outside the local registration system, so the only options are to copy the necessary chunks of code to make passable factors for each product or to fully reverse engineer the basekey algorithm. I've no doubt the factors are an easy computation once you know the algorithm, but code generation becomes less critical when renewal is an option for other games. I leave it to the authors of the Zeus renewal bot if they know how to find these factors more generally.
To renew codes for other games, keep in mind the name must be correct. For instance, Pillars of Garendall is called "Garendall" internally. You can find a game's name by typing a gibberish license in the registration app and seeing what file is created in Preferences. It should be of the form License.
Finally, a couple disclaimers: I have only tested with a handful of keys, so my interpretations and implementations may not be completely correct. YMMV. Furthermore, these code snippets are posted as an interesting case study about how a defunct company once chose to combat software piracy, not to promote piracy. Had Ambrosia remained operational, I'm sure we would have seen a v3 registration system or a move to online-based play as so many other games are doing today, but I hope this has been helpful for those who still wish to revisit their favorite Ambrosia classics.
submitted by asw_anon to evnova

boolean

Boolean data type

From Wikipedia, the free encyclopedia
In computer science, the Boolean data type is a data type that has one of two possible values (usually denoted true and false) which is intended to represent the two truth values of logic and Boolean algebra. It is named after George Boole, who first defined an algebraic system of logic in the mid 19th century. The Boolean data type is primarily associated with conditional statements, which allow different actions by changing control flow depending on whether a programmer-specified Boolean condition evaluates to true or false. It is a special case of a more general logical data type (see probabilistic logic)—logic doesn't always need to be Boolean.



Generalities

In programming languages with a built-in Boolean data type, such as Pascal and Java, the comparison operators such as > and ≠ are usually defined to return a Boolean value. Conditional and iterative commands may be defined to test Boolean-valued expressions.
Languages with no explicit Boolean data type, like C90 and Lisp, may still represent truth values by some other data type. Common Lisp uses an empty list for false, and any other value for true. The C programming language uses an integer type, where relational expressions like i > j and logical expressions connected by && and || are defined to have value 1 if true and 0 if false, whereas the test parts of if, while, for, etc., treat any non-zero value as true.[1][2] Indeed, a Boolean variable may be regarded (and implemented) as a numerical variable with one binary digit (bit), which can store only two values. Booleans in computers are most likely implemented as a full word, rather than a bit; this is usually due to the ways computers transfer blocks of information.
Most programming languages, even those with no explicit Boolean type, have support for Boolean algebraic operations such as conjunction (AND, &, *), disjunction (OR, |, +), equivalence (EQV, =, ==), exclusive or/non-equivalence (XOR, NEQV, ^, !=), and negation (NOT, ~, !).
In some languages, like Ruby, Smalltalk, and Alice the true and false values belong to separate classes, i.e., True and False, respectively, so there is no one Boolean type.
In SQL, which uses a three-valued logic for explicit comparisons because of its special treatment of Nulls, the Boolean data type (introduced in SQL:1999) is also defined to include more than two truth values, so that SQL Booleans can store all logical values resulting from the evaluation of predicates in SQL. A column of Boolean type can also be restricted to just TRUE and FALSE though.

ALGOL and the built-in boolean type

One of the earliest programming languages to provide an explicit boolean data type is ALGOL 60 (1960) with values true and false and logical operators denoted by the symbols '∧' (and), '∨' (or), '⊃' (implies), '≡' (equivalence), and '¬' (not). Due to input device and character set limits on many computers of the time, however, most compilers used alternative representations for many of the operators, such as AND or 'AND'.
This approach with boolean as a built-in (either primitive or otherwise predefined) data type was adopted by many later programming languages, such as Simula 67 (1967), ALGOL 68 (1970),[3] Pascal (1970), Ada (1980), Java (1995), and C# (2000), among others.

Fortran

The first version of FORTRAN (1957) and its successor FORTRAN II (1958) have no logical values or operations; even the conditional IF statement takes an arithmetic expression and branches to one of three locations according to its sign; see arithmetic IF. FORTRAN IV (1962), however, follows the ALGOL 60 example by providing a Boolean data type (LOGICAL), truth literals (.TRUE. and .FALSE.), Boolean-valued numeric comparison operators (.EQ., .GT., etc.), and logical operators (.NOT., .AND., .OR.). In FORMAT statements, a specific format descriptor ('L') is provided for the parsing or formatting of logical values.[4]

Lisp and Scheme

The language Lisp (1958) never had a built-in Boolean data type. Instead, conditional constructs like cond assume that the logical value false is represented by the empty list (), which is defined to be the same as the special atom nil or NIL; whereas any other s-expression is interpreted as true. For convenience, most modern dialects of Lisp predefine the atom t to have value t, so that t can be used as a mnemonic notation for true.
This approach (any value can be used as a Boolean value) was retained in most Lisp dialects (Common Lisp, Scheme, Emacs Lisp), and similar models were adopted by many scripting languages, even ones having a distinct Boolean type or Boolean values; although which values are interpreted as false and which are true vary from language to language. In Scheme, for example, the false value is an atom distinct from the empty list, so the latter is interpreted as true.

Pascal, Ada, and Haskell

The language Pascal (1970) introduced the concept of programmer-defined enumerated types. A built-in Boolean data type was then provided as a predefined enumerated type with values FALSE and TRUE. By definition, all comparisons, logical operations, and conditional statements applied to and/or yielded Boolean values. Otherwise, the Boolean type had all the facilities which were available for enumerated types in general, such as ordering and use as indices. In contrast, converting between Booleans and integers (or any other types) still required explicit tests or function calls, as in ALGOL 60. This approach (Boolean is an enumerated type) was adopted by most later languages which had enumerated types, such as Modula, Ada, and Haskell.

C, C++, Objective-C, AWK

Initial implementations of the language C (1972) provided no Boolean type, and to this day Boolean values are commonly represented by integers (ints) in C programs. The comparison operators (>, ==, etc.) are defined to return a signed integer (int) result, either 0 (for false) or 1 (for true). Logical operators (&&, ||, !, etc.) and condition-testing statements (if, while) assume that zero is false and all other values are true.
After enumerated types (enums) were added to the American National Standards Institute version of C, ANSI C (1989), many C programmers got used to defining their own Boolean types as such, for readability reasons. However, enumerated types are equivalent to integers according to the language standards; so the effective identity between Booleans and integers is still valid for C programs.
Standard C (since C99) provides a boolean type, called _Bool. By including the header stdbool.h, one can use the more intuitive name bool and the constants true and false. The language guarantees that any two true values will compare equal (which was impossible to achieve before the introduction of the type). Boolean values still behave as integers, can be stored in integer variables, and used anywhere integers would be valid, including in indexing, arithmetic, parsing, and formatting. This approach (Boolean values are just integers) has been retained in all later versions of C. Note that this does not mean that any integer value can be stored in a boolean variable.
C++ has a separate Boolean data type bool, but with automatic conversions from scalar and pointer values that are very similar to those of C. This approach was adopted also by many later languages, especially by some scripting languages such as AWK.
Objective-C also has a separate Boolean data type BOOL, with possible values being YES or NO, equivalents of true and false respectively.[5] Also, in Objective-C compilers that support C99, C's _Bool type can be used, since Objective-C is a superset of C.

Perl and Lua

Perl has no boolean data type. Instead, any value can behave as boolean in boolean context (condition of if or while statement, argument of && or ||, etc.). The number 0, the strings "0" and "", the empty list (), and the special value undef evaluate to false.[6] All else evaluates to true.
Lua has a boolean data type, but non-boolean values can also behave as booleans. The non-value nil evaluates to false, whereas every other data type always evaluates to true, regardless of value.

Tcl

Tcl has no separate Boolean type. Like in C, the integers 0 (false) and 1 (true - in fact any nonzero integer) are used.[7]
Examples of coding:
    set v 1
    if { $v } {
        puts "V is 1 or true"
    }
The above will show "V is 1 or true" since the expression evaluates to '1'
set v "" if { $v } ....
The above will render an error as variable 'v' cannot be evaluated as '0' or '1'

Python, Ruby, and JavaScript

Python, from version 2.3 forward, has a bool type which is a subclass of int, the standard integer type.[8] It has two possible values: True and False, which are special versions of 1 and 0 respectively and behave as such in arithmetic contexts. Also, a numeric value of zero (integer or fractional), the null value (None), the empty string, and empty containers (i.e. lists, sets, etc.) are considered Boolean false; all other values are considered Boolean true by default.[9] Classes can define how their instances are treated in a Boolean context through the special method __nonzero__ (Python 2) or __bool__ (Python 3). For containers, __len__ (the special method for determining the length of containers) is used if the explicit Boolean conversion method is not defined.
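For example, a short snippet demonstrating these rules (the class names Box and Flag below are only illustrative):

    class Box:
        def __init__(self, items):
            self.items = items
        def __len__(self):              # used for truth testing when __bool__ is absent
            return len(self.items)

    class Flag:
        def __init__(self, enabled):
            self.enabled = enabled
        def __bool__(self):             # Python 3 name; Python 2 used __nonzero__
            return self.enabled

    print(isinstance(True, int), True + True)       # True 2   (bool is a subclass of int)
    print(bool(0), bool(""), bool([]), bool(None))  # False False False False
    print(bool(Box([])), bool(Box([1, 2])))         # False True  (falls back to __len__)
    print(bool(Flag(False)), bool(Flag(True)))      # False True  (__bool__)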
In Ruby, in contrast, only nil (Ruby's null value) and a special false object are false, all else (including the integer 0 and empty arrays) is true.
In JavaScript, the empty string (""), null, undefined, NaN, +0, −0 and false[10] are sometimes called falsy (of which the complement is truthy) to distinguish between strictly type-checked and coerced Booleans.[11] As opposed to Python, empty containers (arrays, Maps, Sets) are considered truthy. Languages such as PHP also use this approach.

Next Generation Shell

Next Generation Shell has a Bool type. It has two possible values: true and false. Bool is not interchangeable with Int and has to be converted explicitly if needed. When a Boolean value of an expression is needed (for example in an if statement), the Bool method is called. The Bool method for built-in types is defined such that it returns false for a numeric value of zero, the null value, the empty string, empty containers (i.e. lists, sets, etc.), and external processes that exited with a non-zero exit code; for other values Bool returns true. Types for which the Bool method is defined can be used in Boolean context. When evaluating an expression in Boolean context, if no appropriate Bool method is defined, an exception is thrown.

SQL

Main article: Null (SQL) § Comparisons with NULL and the three-valued logic (3VL)
Booleans appear in SQL when a condition is needed, such as in a WHERE clause, in the form of a predicate which is produced by using operators such as comparison operators, the IN operator, IS (NOT) NULL, etc. However, apart from TRUE and FALSE, these operators can also yield a third state, called UNKNOWN, when a comparison with NULL is made.
The treatment of boolean values differs between SQL systems.
For example, in Microsoft SQL Server, boolean value is not supported at all, neither as a standalone data type nor representable as an integer. It shows an error message "An expression of non-boolean type specified in a context where a condition is expected" if a column is directly used in the WHERE clause, e.g. SELECT a FROM t WHERE a , while statement such as SELECT column IS NOT NULL FROM t yields a syntax error. The BIT data type, which can only store integers 0 and 1 apart from NULL, is commonly used as a workaround to store Boolean values, but workarounds need to be used such as UPDATE t SET flag = IIF(col IS NOT NULL, 1, 0) WHERE flag = 0 to convert between the integer and boolean expression.
In PostgreSQL, there is a distinct BOOLEAN type as in the standard[12] which allows predicates to be stored directly into a BOOLEAN column, and allows using a BOOLEAN column directly as a predicate in WHERE clause.
In MySQL, BOOLEAN is treated as an alias of TINYINT(1);[13] TRUE is the same as integer 1 and FALSE is the same as integer 0,[14] and any non-zero integer is treated as true when evaluating conditions.
The SQL92 standard introduced the IS (NOT) TRUE, IS (NOT) FALSE, and IS (NOT) UNKNOWN operators which evaluate a predicate, predating the introduction of the boolean type in SQL:1999.
The SQL:1999 standard introduced a BOOLEAN data type as an optional feature (T031). When restricted by a NOT NULL constraint, a SQL BOOLEAN behaves like Booleans in other languages, which can store only TRUE and FALSE values. However, if it is nullable, which is the default like all other SQL data types, it can have the special null value also. Although the SQL standard defines three literals for the BOOLEAN type – TRUE, FALSE, and UNKNOWN – it also says that the NULL BOOLEAN and UNKNOWN "may be used interchangeably to mean exactly the same thing".[15][16] This has caused some controversy because the identification subjects UNKNOWN to the equality comparison rules for NULL. More precisely UNKNOWN = UNKNOWN is not TRUE but UNKNOWN/NULL.[17] As of 2012 few major SQL systems implement the T031 feature.[18] Firebird and PostgreSQL are notable exceptions, although PostgreSQL implements no UNKNOWN literal; NULL can be used instead.[19]


References


  1. "PostgreSQL: Documentation: 10: 8.6. Boolean Type". www.postgresql.org. Archived from the original on 9 March 2018. Retrieved 1 May 2018.

submitted by finnarfish to copypasta

Reddcoin (RDD) 02/20 Progress Report - Core Wallet v3.1 Evolution & PoSV v2 - Commits & More Commits to v3.1! (Bitcoin Core 0.10, MacOS Catalina, QT Enhanced Speed and Security and more!)

Reddcoin (RDD) Core Dev Team Informal Progress Report, Feb 2020 - As any blockchain or software expert will confirm, the hardest part of making successful progress in blockchain and crypto is invisible to most users. As developers, the Reddcoin Core team relies on internal experts like John Nash, contributors offering their own code improvements to our repos (which we would love to see more of!) and especially upstream commits from experts working on open source projects like Bitcoin itself. We'd like to thank each and every one whose hard work has contributed to this progress.
As part of Reddcoin's evolution, and in order to include required security fixes and speed improvements that are long overdue, the team has up to this point incorporated the code commits listed below since our last v3.0.1 public release. In attempting to solve the relatively minor font display issue with MacOS Catalina, we uncovered a complicated interweaving of updates between Reddcoin Core, QT software, MacOS SDK, Bitcoin Core and related libraries and dependencies that mandated we take a holistic approach: not only solving the Catalina display problem, but in doing so preparing a more streamlined overall build and test system, allowing the team to roll out more frequent and more secure updates in the future, and also including some badly needed fixes in the current version of Core, which we have tentatively labeled Reddcoin Core Wallet v3.1.
Note: As indicated below, v3.1 is NOT YET AVAILABLE FOR DOWNLOAD BY THE PUBLIC. We will advise when it is.
The new v3.1 version should be ready for internal QA and build testing by the end of this week, with luck, and will be turned over to the public shortly thereafter once testing has proven no unexpected issues have been introduced. We know the delay has been a bit extended for our ReddHead MacOS Catalina stakers, and we hope to have them all aboard soon. We have moved with all possible speed while attempting to incorporate all the required work and testing, and to ensure security and safety for our ReddHeads.
Which leads us to PoSV v2 activation: the supermajority on Mainnet at the time of this writing has reached 5625/9000 blocks, or 62.5%. We have progressed quite well and without any reported user issues since release, but we need all of the community to participate! This activation, much like the funding mechanisms currently being debated by BCH and others, and employed by DASH, will not only be a catalyst for Reddcoin but will ensure its future by providing funding for the dev team. As a personal plea from the team, please help us support the PoSV v2 activation by staking your RDD, no matter how large or small your amount of stake.
Every block and every RDD counts, and if you don't know how, we'll teach you! Live chat is fun as well as providing tech support you can trust from devs and community ReddHead members. Join us today in staking and online and collect some RDD "rain" from users and devs alike!
If you're holding Reddcoin and not staking, or you haven't upgraded your v2.x wallet to v3.0.1 (current release), we need you to help achieve consensus and activate PoSV v2! For details, see the pinned message here or our website or medium channel. Upgrade is simple and takes moments; if you're nervous or unsure, we're here to help live in Telegram or Discord, as well as other chat programs. See our website for links.
Look for more updates shortly as our long-anticipated Reddcoin Payment Gateway and Merchant Services API come online with point-of-sale support, as we announce the cross-crypto-project Aussie firefighter fundraiser program, as well as a comprehensive update to our development roadmap and more.
Work has restarted on ReddID and multiple initiatives are underway to begin educating and sharing information about ReddID, what it is, and how to use it, as we approach a releasable ReddID product. We enthusiastically encourage anyone interested in working to bring these efforts to life, whether writers, UX/UI experts, big data analysts, graphic artists, coders, front-end, back-end, AI, DevOps, the Reddcoin Core dev team is growing, and there's more opportunity and work than ever!
Bring your talents to a community and dev team that truly appreciates it, and share the Reddcoin Love!
And now, lots of commits. As v3.1 is not yet quite ready for public release, these commits have not been pushed publicly, but in the interests of sharing progress transparently, and including our ReddHead community in the process, see below for mind-numbing technical detail of work accomplished.
* e5c143404 - - 2014-08-07 - Ross Nicoll - Changed LevelDB cursors to use scoped pointers to ensure destruction when going out of scope.
* 99a7dba2e - - 2014-08-15 - Cory Fields - tests: fix test-runner for osx. Closes ##4708
* 8c667f1be - - 2014-08-15 - Cory Fields - build: add funcs.mk to the list of meta-depends
* bcc1b2b2f - - 2014-08-15 - Cory Fields - depends: fix shasum on osx < 10.9
* 54dac77d1 - - 2014-08-18 - Cory Fields - build: add option for reducing exports (v2)
* 6fb9611c0 - - 2014-08-16 - randy-waterhouse - build : fix CPPFLAGS for libbitcoin_cli
* 9958cc923 - - 2014-08-16 - randy-waterhouse - build: Add --with-utils (bitcoin-cli and bitcoin-tx, default=yes). Help string consistency tweaks. Target sanity check fix.
* 342aa98ea - - 2014-08-07 - Cory Fields - build: fix automake warnings about the use of INCLUDES
* 46db8ad51 - - 2020-02-18 - John Nash - build: add build.h to the correct target
* a24de1e4c - - 2014-11-26 - Pavel Janík - Use complete path to include bitcoin-config.h.
* fd8f506e5 - - 2014-08-04 - Wladimir J. van der Laan - qt: Demote ReportInvalidCertificate message to qDebug
* f12aaf3b1 - - 2020-02-17 - John Nash - build: QT5 compiled with fPIC require fPIC to be enabled, fPIE is not enough
* 7a991b37e - - 2014-08-12 - Wladimir J. van der Laan - build: check for sys/prctl.h in the proper way
* 2cfa63a48 - - 2014-08-11 - Wladimir J. van der Laan - build: Add mention of --disable-wallet to bdb48 error messages
* 9aa580f04 - - 2014-07-23 - Cory Fields - depends: add shared dependency builder
* 8853d4645 - - 2014-08-08 - Philip Kaufmann - [Qt] move SubstituteFonts() above ToolTipToRichTextFilter
* 0c98e21db - - 2014-08-02 - Ross Nicoll - URLs containing a / after the address no longer cause parsing errors.
* 7baa77731 - - 2014-08-07 - ntrgn - Fixes ignored qt 4.8 codecs path on windows when configuring with --with-qt-libdir
* 2a3df4617 - - 2014-08-06 - Cory Fields - qt: fix unicode character display on osx when building with 10.7 sdk
* 71a36303d - - 2014-08-04 - Cory Fields - build: fix race in 'make deploy' for windows
* 077295498 - - 2014-08-04 - Cory Fields - build: Fix 'make deploy' when binaries haven't been built yet
* ffdcc4d7d - - 2014-08-04 - Cory Fields - build: hook up qt translations for static osx packaging
* 25a7e9c90 - - 2014-08-04 - Cory Fields - build: add --with-qt-translationdir to configure for use with static qt
* 11cfcef37 - - 2014-08-04 - Cory Fields - build: teach macdeploy the -translations-dir argument, for use with static qt
* 4c4ae35b1 - - 2014-07-23 - Cory Fields - build: Find the proper xcb/pcre dependencies
* 942e77dd2 - - 2014-08-06 - Cory Fields - build: silence mingw fpic warning spew
* e73e2b834 - - 2014-06-27 - Huang Le - Use async name resolving to improve net thread responsiveness
* c88e76e8e - - 2014-07-23 - Cory Fields - build: don't let libtool insert rpath into binaries
* 18e14e11c - - 2014-08-05 - ntrgn - build: Fix windows configure when using --with-qt-libdir
* bb92d65c4 - - 2014-07-31 - Cory Fields - test: don't let the port number exceed the legal