Objects in C, IR, and ASM--how do you get low-level?: Getting rid of all the crap

Discussion in 'Programming General' started by Jimmy, Dec 25, 2015.

  1. Unread #1 - Dec 25, 2015 at 8:07 AM
  2. Jimmy
    Joined:
    Jun 24, 2008
    Posts:
    2,421
    Referrals:
    10
    Sythe Gold:
    25

    Jimmy Ghost
    Retired Sectional Moderator $5 USD Donor


    Let's say I'm building an encryption algorithm. Fuck Microsoft, fuck Java, and fuck C++.

    If I want to build a pretty UI, I can write some "new" classes referencing the OpenGL or WxWidgets libraries--that's really easy, anybody can do that.

    But fuck that fucking shit. I WANT SPEED. I'm working on a compiler. Okay? Mostly for natural language processing. And optimization. AI.

    Manipulating raw memory is my only goal--networking comes in later using higher-level languages for whatever noobs adopt my framework. Basically, I want to re-implement the C/C++ standard data types using only those limited pieces of memory addressing that are absolutely necessary--no more goddamn fucking bullshit.

    Linked lists are too damn slow--I want arrays of contiguous memory, pointers only where absolutely needed. I've taken way too many basic/intermediate/advanced C++ classes at my university, I'm too good at it, and I completely hate it all: There's useless crap everywhere, very arbitrary design, ideas that are completely stupid and don't make any sense. Just tons of extraneous nonsense that makes programming really "easy" for "newcomers" but not at all good for anybody, least of all the end-user.

    How does one write secure code? What makes a database practically impenetrable? I have no fucking idea--I don't want to re-use a bunch of shitty algorithms written by the CIA that any NSA employee can reverse-engineer.

    I WANT SECURITY.

    I need to implement one structure: SecureHashTable

    I don't know where to start because I don't particularly understand hashing algorithms--what are the actual bit manipulations that make one hash "better" than another?

    Even just implementing a simple mathematical Set--really think about it for a moment.

    From Wikipedia: https://en.wikipedia.org/wiki/Set_(abstract_data_type)

    Code:
    equals(S1, S2) -> hash(S1) = hash(S2)
    What the fuck does that statement even mean? It makes no sense to me philosophically. What is "equality" when the "values" of a set are arbitrary and the "equal sign" is a construct of human logic? Quantum theory would tell us, this is all fucking bullshit. I need a good metaphysician since this is basically just an incoherent rant by now: Consider Leibniz's Law.



    Fucking monadology.

    What is a function? It's just a map. So how do you implement a function using other functions? It's too much of a puzzle. I don't really get it.

    These are mathematical functions--so are we using discrete mathematics (integers) or continuous mathematics (calculus)? What is infinity? Nobody fucking knows.

    Circuit boards have analogue components (which measure physical quantities) and digital components (which perform logical operations on numbers).

    Mathematics is a priori true. Everything else--including electrodynamics--is just a model. This is weird for cryptography.

    How do you encode a "message" that you know is true into a device which has bus errors?

    All signals have loss and gain from ambient noise in the environment.

    How do pointers even work? Pointers are integers. So they store an integer in memory. But the mapping necessarily has to establish a separate address for the integer itself and for the memory: pointers can't store one value, they store two values.

    [variable x, address 0x04, value=0]
    int x = 0;

    [variable y, address 0x08, value=0x04]
    int* y = &x;

    [variable z, address 0x0C, value=0x08]
    int** z = &y;

    See the problem? It's an infinite regress--why? How does one implement a simple linked list in Assembly?

    Code:
    struct Node { struct Node* other; };
    struct Node a;
    struct Node b;
    a.other = &b;
    We have eax, ebx, ecx, edx

    x86 commands
    Code:
    mov--move a value between registers/memory
    add--add two numbers
    cmp--compare two values (sets the flags)
    test--bitwise AND (sets the flags, does not store the result)
    j___--jump based on the flags (je, jne, jl, ...)
    call--call a function (pushes the return address)
    push/pop--push a value onto, or pop it off, the stack
    We have an instruction stack with a bottom and a top. But what defines the flow of time--user experience? It's a genetic algorithm, then, but what is the selection function?

    I have no idea.

    Apologies for the incomprehensible rant that's all over the place, but at the very least I'll hopefully be able to look back on this and make some more sense of what I'm talking about at some point.

    How does GOTO even work?
     
  3. Unread #2 - Dec 25, 2015 at 11:59 PM
  4. Sythe
    Joined:
    Apr 21, 2005
    Posts:
    8,071
    Referrals:
    461
    Sythe Gold:
    5,251
    Discord Unique ID:
    742989175824842802
    Discord Username:
    Sythe
    Sythe

    Administrator Village Drunk


    Too hung over to work on xen right now so I'll answer your questions:

    You can have performance or you can have security; you can't optimize for both. There's always a trade-off.

    All that 'extraneous crap' is actually quite well designed (at least as far as C/C++ and the POSIX standard go), and today's optimizing compilers will generate far more efficient native code from it than you could ever write by hand.

    How do you write secure code? You don't. You leave that to cryptography experts like https://en.wikipedia.org/wiki/Daniel_J._Bernstein

    You could spend a lifetime studying side channel attacks (such as cache timing attacks), buffer overruns, and specific flaws of one type or another in the numerous instruction sets and hardware implementations of these. This is what cryptography experts do for a living and they achieve excellence at it -- which you never will unless you dedicate your life to it. Not all of them work for the NSA. Djb is quite anti-NSA and has had his elliptic curve cryptography repeatedly rejected from NIST standards because they can't put backdoors in it -- it's too good. Another way you know he's your guy is that he releases his algorithms into the public domain -- something only a pure researcher would do. He also sued the US government (and won) to protect his algorithms under free speech laws so they couldn't jail him for exporting munitions. There are people like him all over the world working to make computer security a reality.

    Mathematics is not a priori. That is a made-up ex post facto justification for it. If you research the history of mathematics you will learn about the foundational crisis and things like naive set theory. In fact mathematics is just an abstraction built from repeatable experiments performed in reality. It is part of the laws of physics. We only have the concept of numbers, counting things, sets etc. because of conservation of mass/energy and spatial exclusivity. Numbers are built by concatenating the logical operator AND over spatially-exclusive, materially-conserved quantities. Equality is a concept but it's not a baseless concept. Things can have similar or identical traits and we can isolate and signal our recognition of these similar traits. For example the same count of an entity can occur in two different places. We can say in short that the two places have the same number of the entity. We can abstract further and say two sets have the same cardinality.

    Quantum theory only works at the quantum level. Once you get to macroscopic "hot" systems the classical laws all apply. Some would argue (including myself) that classical laws also apply to the quantum world and in fact quantum is just a special case of the classical. For example this line of thought is embodied in the causal or pilot wave interpretations of quantum mechanics.

    You need to read some reality based philosophy rather than the self-referential infinite regression bullshit taught in mainstream universities. I suggest http://www.amazon.com/Introduction-Objectivist-Epistemology-Expanded-Edition/dp/0452010306

    Leibniz's law, like most things, can be sorted out by just calling things by their proper names. Identical things do not exist in reality due to the law of identity. A thing is itself; it is not also another different thing. Each thing has a single unique identity. When we talk about equality we are comparing a set of similar attributes. Comparison operations are perfectly legal and do not break the law of identity. If we look at post boxes which lack numbers we can say they are all identical in their construction and appearance, but we don't say they are the same postbox. They are made of different atoms and/or exist in different places or times. If you differentiate concretes from abstractions, you can easily see that concretes follow the law of identity and abstractions don't -- however abstractions aren't real -- their only reality is a series of neurological signals that occur in your brain or a series of electrical or photonic signals that occur in a computer, each set of which is a unique concrete. The law of identity is thus unbroken in making comparisons. All you need to do is differentiate concretes from abstractions.

    A function is a recipe. It's not defined by other functions. It's not defined by sets or collections or other mathematical constructs. It's defined in terms of the laws of reality. Causality, the law of identity, and conservation of mass-energy are sufficient to define a function: if you perform the same operations on similar inputs you will achieve similar outputs.

    Yes, we do know what infinity is. Infinity as a concept simply means to go on forever and is useful for considering processes that you never intend to terminate. As an actual number (i.e. actual infinity) it is just an assertion brought into mathematics by the axiom of infinity. If you don't accept the axiom, it's not in mathematics. It's not a philosophical axiom (i.e. a statement you can't disprove without first assuming it); it's actually just a pure assertion: "Infinity, as a number, exists." Axiom of infinity.

    Continuous mathematics is internally inconsistent. You've learned ZFC mathematics, which has the conspicuous property of leaving out infinitesimals, which provide a logical bridge between discrete and continuous models. This causes a problem because you can never move from one number to another at the fundamental level. If 1.999999 recurring is equal to 2 then a small amount less than 1.999999 recurring is equal to 1.999999 recurring which is still equal to two. Inductively then, all numbers are equal to two. And further, whichever number you start with, all numbers are equal to that number. This is one of many, many internal contradictions in ZFC which you will find if you look for them. In order for mathematics to make sense you need to make every actual thing in it finite, as in https://en.wikipedia.org/wiki/Finitism This is what reality tells us. Mathematics as it stands has become divorced from reality; that's why you have garbage like countable and uncountable actual infinities, etc. It's self-referential bullshit that doesn't help you solve real problems. In physics we pick and choose mathematics to suit the problem at hand and we don't bother trying to untangle the mess modern mathematicians have made of it. For example in advanced quantum / perturbation theory we use dual numbers ( https://en.wikipedia.org/wiki/Dual_number ) which give us an infinitesimal to work with while still using the continuous approximation. In other branches we use discrete mathematics, etc.

    You use parity bits and checksums. You test your implementation rigorously. You build a conclusion about the device the same way we build conclusions about everything in reality. You test it X number of times and conclude that if X is large and it worked the first X times then it will work the (X+1)th time. And the probability that it won't becomes vanishingly small. This is called inductive reasoning. Sometimes you can do better than this and prove mathematically (i.e. deductively) that it won't fail. However all deductive reasoning has as premises, at some point along the chain of reason, some inductively reasoned law (such as a law of physics).

    In applications where hardware/software failure poses a great risk to human life, two competing computer systems, produced by different teams on different hardware/software to the same set of specifications, accept the same inputs and produce the same outputs; the outputs are compared, and if they differ a graceful failure occurs. This happens in planes.

    You can isolate digital systems from ambient noise by Faraday shielding.


    A pointer is a memory address and a type: conceptually two pieces of information. When compiled it is just an address: one piece of information. Usually you point to some object or memory on the heap using a pointer stored on the stack. This is logically equivalent to storing a phone number in your address book. If the phone number changes you can always update it in your address book. It gives you a constant place to look for a shifting target. This is one of the fundamental reasons pointers are used so extensively in programming. Algorithms rely on moving targets. In fact the processor itself has an instruction pointer which points to the native instruction it is currently up to. This is how goto works: it just updates the instruction pointer. The instruction pointer is kept in a register, which again is a constant place to find a reference to a moving target.

    You need to stop trying to apply broken ZFC mathematics to computer science. They are fundamentally different fields. Mathematics gives us shortcuts to certain computational results. However most computational results are irreducibly complex and can only be obtained by putting the problem into a computer and doing the calculation. Or putting it into reality and doing the experiment -- the two are no different. What I am saying is that computation is directly part of reality, and every time you write and execute code you are performing a scientific experiment against a hypothesis: this code will do X.

    With this mathematical analysis of computing you are putting the cart before the horse. In some cases mathematics will tell you what to expect from your program, but this is still just hypothesis forming. The computation itself is the primary, not the mathematics. Mathematics is a body of deductive reasoning and it is not a primary. Mathematics is built from experiment, not the other way around.
     