
New resource data formats

Comments

  • HeadClotHeadClot Member Posts: 10
    I say look into glTF and .FBX

    More info on glTF can be found here -> https://github.com/KhronosGroup/glTF
  • TheUncertainManTheUncertainMan Member Posts: 49
    edited February 2018

    There is a proposal on the trello boards that people are voting for, yet until this thread it does not appear to have been discussed elsewhere. Nor do I recall a thread anywhere on these forums where all of these ideas have even been proposed together. This is the proposal -

    New resource data formats for higher performance, quality, and compatibility

    Art:

    - 32 bit PCM wav for SFX/voice sounds

    Now I question why on earth anyone would feel the need for such an overly accurate representation of the sound of water droplets falling into a pool of water. You do realise that 32 bits gives 65,536 times the number of quantisation levels of CD music (16 bit Pulse Code Modulation)? That level of accuracy will be lost on by far the vast majority of people (is your PC/Mac hooked up to a Bang and Olufsen sound system?).

    - "open" resolutions for mp3 audio (128 - 320 CBR/VBR) in .mp3 or "lossless" formats

    This specific proposal needs to be expanded. Now I have no problem with the mp3 part (all right - there's no need to be sarcastic - get up and stop pretending you've fainted). After all, NwN has used mp3 files from the get-go. The only difference between a BMU file and an MP3 file of the same sounds is that a BMU file has eight extra bytes. These extra bytes are inserted at the very beginning of an mp3 file to change it into a bmu file. The actual bytes are the same in every case - "BMU V1.0" (just drag and drop any bmu file onto the VLC player and it treats it as what it essentially is - an mp3). Anyway, the reason I say this needs to be expanded is that we need to know specifically which lossless format(s) we are talking about here if a meaningful discussion is to be had.
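    To illustrate, stripping that eight-byte header is trivial (a sketch; the file paths are just examples, and real .bmu files should be checked for the magic bytes rather than blindly truncated):

```python
# Convert a NWN .bmu file to a plain .mp3 by stripping the eight-byte
# "BMU V1.0" header, if present (illustrative sketch, not an official tool).

def bmu_to_mp3(bmu_path: str, mp3_path: str) -> None:
    with open(bmu_path, "rb") as f:
        data = f.read()
    # A .bmu is an .mp3 with "BMU V1.0" (8 bytes) prepended.
    if data.startswith(b"BMU V1.0"):
        data = data[8:]
    with open(mp3_path, "wb") as f:
        f.write(data)
```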

    I suggested this.

    I'm not really concerned with high quality water droplets or the fact that the accuracy might be lost on the vast majority of people.
    I paid good money for a microphone to record my voice acting, and I export it at Wav 32bit, unless someone comes along and asks for a different format.
    People may not appreciate the difference between 16bit and 32bit. They might not even notice the difference between Wav and MP3. But at least it's the highest quality it can be.

    I understand that it's not needed. But since this is a remaster and we're pushing the boat out, asking for support for higher quality formats is pretty reasonable compared to the demand for better graphics.

  • Dark_AnsemDark_Ansem Member Posts: 992

    TheUncertainMan said:

    A level of accuracy that will be lost on by far the vast majority of people (is your PC/Mac hooked up to a Bang and Olufsen sound system?). [...] I understand that it's not needed. But since this is a remaster and we're pushing the boat out, asking for support for higher quality formats is pretty reasonable compared to the demand for better graphics.

    I can. I also second the idea of more formats. Maybe not lossless, but definitely OGG q10.
  • MartinusMartinus Member Posts: 3
    If there are no problems with looping audio clips in mp3, then anything above 16 bit CBR 320 kbps makes no sense. Let the devs focus on really important stuff instead of inventing artificial problems.
  • Dark_AnsemDark_Ansem Member Posts: 992
    Martinus said:

    If there are no problems with looping audio clips in mp3, then anything above 16 bit CBR 320 kbps makes no sense. Let the devs focus on really important stuff instead of inventing artificial problems.

    I don't know about that.
  • CameronpCameronp Member Posts: 1
    Re: DDS

    As I understand it, .dds files have a few advantages over .png. Their decompression is done in hardware on many graphics cards, which means that texture assets can be handled faster than with other formats. That said, there are .jpg decoders as well as .png decoders; I believe they just cost more GPU resources.

    DDS also includes a mip chain that can be manipulated either through the .dds generating application or by hand-modifying the chain. The mip chain is also required to implement something like Toksvig maps. You can also leverage the mip chain for a number of tricks to improve alpha-test effectiveness and detail-map fading. That said, much of this can be accomplished via shaders at a run-time cost versus precomputing.

    The other reason to consider the newer .dds is 3Dc/BC5 compression, which supports normal maps better and which I believe came out after the current NWN implementation (I think it was introduced with DX10, so 2006). Obviously with existing assets that is a minimal problem, but if normal maps become standard for new CC then supporting better compression for them would be ideal.

    As far as editing goes, the nwndds format cannot be directly edited to my knowledge and must be converted to .tga. If support for the modern .dds format were added in addition to .tga/nwndds, the main difference would be that tools which decode nwndds to .tga would not work on the new .dds without updates. That said, modern .dds files are readily editable in Photoshop as well as other applications, so extracting them as .dds would still allow you to edit them.

    A further thought: I have no idea whether nwndds textures are in linear space or gamma space. That's not a huge problem for older-style content, but as more advanced rendering features are added it starts to matter more. In the end most of the tech stuff may not mean much if nobody is looking to push the engine to the point where these things matter.
  • RoadieRoadie Member Posts: 6
    My first thought for a 2DA and TLK alternative would be to use JSON. There's a billion different tools out there to edit JSON (including online), it can be edited with a plain text editor, and it means that tabs, line breaks, and whatever else are easy to include in text without worrying about implicit behavior in the format. It's slightly more verbose than a tab-separated/CSV style, but that means there's more room for future extensibility (e.g. new fields, etc).

    A quick example of what contents of a .2da.json file could look like:

    [
      {
        "id": 1, "label": "Ambidex",
        "nameId": 204, "descriptionId": 222, "icon": "ife_ambidex",
        "prereqs": [{ "dex": 15 }],
        "gainMultiple": false, "effectsStack": false, "allClassesCanUse": false,
        "category": null, "maxCr": null, "spellId": null, "successor": null,
        "crValue": 1, "usesPerDay": null, "masterFeat": null, "targetSelf": null,
        "constant": "FEAT_AMBIDEXTERITY", "toolsCategories": 1,
        "hostileFeat": null, "reqAction": 1
      },
      {
        "id": 910, "label": "FEAT_EXTRA_SMITING",
        "nameId": 8782, "descriptionId": 8783, "icon": "ife_X2ExtrSmit",
        "prereqs": [{ "or": [{ "feat": 301 }, { "feat": 472 }] }],
        "gainMultiple": false, "effectsStack": false, "allClassesCanUse": true,
        "category": null, "maxCr": null, "spellId": null, "successor": null,
        "crValue": 0.2, "usesPerDay": null, "masterFeat": null, "targetSelf": null,
        "constant": "FEAT_EXTRA_SMITING", "toolsCategories": 5,
        "hostileFeat": null, "reqAction": true
      }
    ]
  • ProlericProleric Member Posts: 1,281
    I can't immediately see how such a massively verbose / redundant format is an improvement on 2da, which is compact, and can easily be edited in a spreadsheet. Remember, the most frequently edited files have thousands of entries.
  • TarotRedhandTarotRedhand Member Posts: 1,481
    If that is JSON I have to say it looks awfully inefficient compared to a 2da file. I have to agree with @Proleric about the wordiness of it. Not impressed. Looks to me to be only one step above a CSV file.

    TR
  • RoadieRoadie Member Posts: 6
    edited April 2018
    The point of the verbosity would be to allow arbitrary extensions of keys in the future, as well as to allow data compactness by just leaving keys out based on a default value (e.g. treat all missing keys as being an appropriate default value, which would allow a lot of items to only need the bare minimum of data).

    For example, imagine a future version adding a crValueMultiplier field, or different prereq options like "xor" or "not" (the latter for modeling a couple of the mutually exclusive feat lines that exist). A quick example of what I mean:

    "prereqs": [{ "not": { "feat": 333 } }, { "str": 18 }, { "or": [{ "dex": 18 }, { "con": 18 }] }, { "spell": 505 }]

    Having non-fixed fields means that, as long as you're sanely handling defaults, future versions can add all sorts of extensions to data functionality without requiring existing files to be updated to work with them.
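    To illustrate the defaults idea (a sketch in Python; the field names and default values here are made up for the example, not an actual spec):

```python
# Sketch of default-value handling for a hypothetical .2da.json loader:
# any key missing from a stored row falls back to a default, so rows
# only need to carry the fields that differ from the baseline.

FEAT_DEFAULTS = {
    "gainMultiple": False,
    "effectsStack": False,
    "allClassesCanUse": False,
    "crValue": 0,
    "usesPerDay": None,
}

def load_feat(row: dict) -> dict:
    # Merge the stored row over the defaults; stored keys win.
    return {**FEAT_DEFAULTS, **row}
```

    A row like `{"id": 1, "crValue": 1}` would then come back fully populated, and a future engine version could add new keys to the defaults without touching any existing file.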
  • RoadieRoadie Member Posts: 6
    The big point of using JSON (or any of a variety of similar formats) would be future data extensibility. As long as defaults are handled sanely, future data functionality could be added in all kinds of ways without requiring preexisting files to be updated to work.

    For example, arbitrary functionality could be added to "prereqs":
    "prereqs": [{ "not": { "feat": 333 } }, { "str": 18 }, { "or": [{ "dex": 18 }, { "con": 18 }] }, { "spell": 505 }]

    And alternative fields could be added for new functionality:
    "maxCr": null, "minCr": 8
    "usesPerDay": null, "usesPerRound": 1
  • SherincallSherincall Member Posts: 387
    All the formats can be easily converted to JSON for editing, storage, transfer, diffing, etc, and then converted back to the game format for running. There's even official tools for that: https://github.com/niv/neverwinter_utils.nim (these tools are actually used to package the game).
  • TarotRedhandTarotRedhand Member Posts: 1,481
    edited April 2018
    2da files are easily extensible; you just add new columns. No, JSON just looks like another in a long line of coding fashions, to which you can add XML: overly verbose substitutes for something that's not broken. Yes, 2das are just flat-file database files in essence, but it's not as though they are there to replace a relational database.

    TR
  • PlokPlok Member Posts: 106
    JSON files are cool. As far as I'm concerned JSON (JavaScript Object Notation) is the only good thing to come out of Javascript. I have yet to find a more terse and flexible way of encoding a document (or other kind of recursive structure).

    Unfortunately, they aren't really appropriate for 2das. The thing that I like about JSON - the recursiveness - is exactly why it shouldn't be used for 2das. A 2da is effectively a table in a relational database like MySQL. 2das/tables express recursion using relationships to other tables. Notice how iprp_spells.2da has a spell_index column? That's a foreign key that defines a relationship with spells.2da.

    Anyway, while we're on the subject of 2das; can we please have some method of having a hak APPEND rows to a 2da instead of just overwriting it? It would be great to have multiple unrelated haks do things like add custom feats or classes.
  • RifkinRifkin Member Posts: 141
    Plok said:


    Anyway, while we're on the subject of 2das; can we please have some method of having a hak APPEND rows to a 2da instead of just overwriting it? It would be great to have multiple unrelated haks do things like add custom feats or classes.

    An auto-collision detector and auto-reindexer? I like the sound of it.
  • Dark_AnsemDark_Ansem Member Posts: 992
    Rifkin said:

    Plok said:


    Anyway, while we're on the subject of 2das; can we please have some method of having a hak APPEND rows to a 2da instead of just overwriting it? It would be great to have multiple unrelated haks do things like add custom feats or classes.

    An auto-collision detector and auto-reindexer? I like the sound of it.
    Something like THIS? https://www.nexusmods.com/kotor/mods/2
  • PlokPlok Member Posts: 106
    @Rifkin It's fundamentally impossible to do because of scripting. You'll notice the KOTOR program @Dark_Ansem linked to pretty much says "you're on your own with scripts".

    You can detect collisions in 2das easily enough. The algorithm I came up with off the top of my head is: iterate over every row in every 2da file, grab any IDs and references to those IDs, and note which file the 2da is in. For each conflict, assign a new ID, then go fix up the references to the conflicting ID from the same file. I'd imagine the linked program does something similar.
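    In rough Python, that algorithm might look like this (the data layout here is invented purely for illustration; real 2das obviously aren't dicts):

```python
# Sketch of the 2da row-collision fix-up described above: when two haks
# add rows with the same ID, give the later one a fresh ID and patch any
# references to it from within the same hak.

def reindex(haks):
    """haks: list of dicts mapping row id -> row dict, where a row may
    carry a 'ref' field pointing at another id in the same hak."""
    used = set()
    merged = {}
    for hak in haks:
        remap = {}
        for rid in sorted(hak):
            new_id = rid
            while new_id in used:        # conflict: take the next free id
                new_id += 1
            remap[rid] = new_id
            used.add(new_id)
        for rid, row in hak.items():
            row = dict(row)
            if "ref" in row and row["ref"] in remap:
                row["ref"] = remap[row["ref"]]   # fix same-hak references
            merged[remap[rid]] = row
        # Note: references from *scripts* can't be fixed this way.
    return merged
```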

    That approach works because the relationships between the 2da rows can be inferred from their being in the same file. You can't really do that with scripts. Sure, scripts will probably be in the same file as the 2das they need, but actually working out which numbers are IDs is where you fall flat. If nwscript had an "ID" type to differentiate IDs from integers, that would open up some possibilities, but I'd imagine there'd still be edge cases where it doesn't work. For one thing, how do you tell what it's an ID of?

    Really, the simplest solution is to not have conflicts in the first place. There are two fundamental approaches I can think of: make it mathematically impossible to get conflicting IDs, or encode the relationship in the ID.

    The first approach is the most intuitive. We already have a thing for this in the world of computer touching: the humble UUID (aka GUID). I don't like this approach and I can summarise why very simply; this is what a UUID looks like: 123e4567-e89b-12d3-a456-426655440000. To expand on this, here are the pros and cons:

    Pros:
    • If people use it properly, you will NOT get a conflict
    • It's standardised and has support in all known programming languages
    Cons:
    • You can't hold that in short term memory (just look at it)
    • You have to use an external program to generate UUIDs (randomness is important)
    • People are quite likely to just go 12345678-1234-1234-1234-112233445566 or some other predictable pattern (which breaks the "you're never gonna see a conflict 'till way past the heat death of the universe" thing)
    • Mods that use this won't be backwards compatible - classic NWN will not know what to do with these
    • It's probably going to break scripts that iterate on IDs - there'll be cases where this will break existing mods
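    (For what it's worth, "use an external program" is a one-liner in most languages; e.g. with Python's standard uuid module:)

```python
# Generating a proper random (version 4) UUID with the standard library,
# rather than typing a predictable pattern by hand.
import uuid

row_id = uuid.uuid4()
print(row_id)   # e.g. something like 9f1c2e4a-7b3d-4c8e-a1f0-2d5b6c7e8f90
```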

    The other approach - encoding the relationship into the ID - is the one I'm in favour of. Instead of having an ID, you have a namespace (programmer speak for a "group" or a "sandbox") of IDs and an ID within that namespace. Instead of Feat ID 12345 you have Feat ID 1 in namespace "my_awesome_mod_namespace". The EE then assigns a Feat ID in the global namespace (which may be 12345) and the scripts can do a lookup to get the global ID (with some extra functions to turn the namespaced ID into a global one).

    Pros:
    • The EE retains backwards compatibility with all existing NWN mods (including iteration over IDs)
    • Doesn't require an external program
    • Easy to memorise
    • Conflicts are easily corrected (it's a find and replace if your namespace is easily distinguishable e.g. "my_awesome_mod_namespace")
    Cons:
    • Conflicts are still possible (but much rarer and much easier to fix)
    • Mods that use this won't be backwards compatible - classic NWN will not know what to do with these
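    A rough sketch of the namespace lookup, in Python rather than nwscript (every name here is hypothetical; the real thing would live in the engine):

```python
# Sketch of the namespaced-ID idea: each mod numbers its own rows from 1
# inside its namespace, and the engine hands out global IDs at load time.

class FeatRegistry:
    def __init__(self, first_free_id: int):
        self._next = first_free_id
        self._table = {}             # (namespace, local_id) -> global id

    def register(self, namespace: str, local_id: int) -> int:
        key = (namespace, local_id)
        if key not in self._table:   # assign the next free global slot
            self._table[key] = self._next
            self._next += 1
        return self._table[key]

    def global_id(self, namespace: str, local_id: int) -> int:
        # What a Get2daGlobalID-style script function would call into.
        return self._table[(namespace, local_id)]
```

    Scripts would then ask for `global_id("my_awesome_mod_namespace", 1)` instead of hardcoding 12345.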
  • RifkinRifkin Member Posts: 141
    Plok said:

    @Rifkin It's fundamentally impossible to do because of scripting. You'll notice the KOTOR program @Dark_Ansem linked to pretty much says "you're on your own with scripts".

    You can detect collisions in 2das easily enough. The algorithm I came up with off the top of my head is; iterate over every row in every 2da file, grab any ids and references to those ids and note what file the 2da is in. With each conflict assign a new ID then go fix up the references to the conflicting ID from the same file. I'd imagine the linked program does something similar.

    That approach works because the relationships between the 2da rows can be inferred because they're in the same file. You can't really do that with scripts. Sure, scripts will probably be in the same file as the 2das they need, but actually working out which numbers are IDs is where you fall flat. If nwscript had an "ID" type to differentiate IDs from integers that would open up some possibilities but I'd imagine there'd still be edge cases where it doesn't work. For one thing, how to do you tell what it's an ID of?


    Yes, but even a list of identifiers and what they have been updated/replaced with would be enough for most of us to trawl through scripts and correct the IDs. I know that I myself never hardcode integers; I always use an include with the ID references in it. So for me it would be a matter of updating one file. Not really all that difficult.

    Of course, while the other approaches you offered sound nice, they still don't escape the fact that at some point someone will have to manually fix all of their IDs if we change the way the 2da works.

    I do believe it would be possible to at least automate a conflict list that would go through module scripts and provide you with potential issues/IDs to be corrected, similar to how subversion software provides conflict reports. You could choose on each instance what ID to change it to, or leave it alone.
  • PlokPlok Member Posts: 106
    @Rifkin I think we're talking at cross purposes here. My thinking is that the Steam Workshop is now a thing. People expect that if you install multiple mods from the workshop they'll just work. The main thing stopping this is 2das/tlks. Manually fixing mods to work together is fine for something like a persistent world server, but it's not fine for people who just want to play a module with their friends.

    There really shouldn't be anything manual about having two mods play together. I can accept having to fix up scripting conflicts - there's not a lot that can be done about those because nwscript is a procedural programming language - but manually merging data? There is no technical reason why that should be a thing.

    You're totally correct about people having to fix up their IDs. However, this only has to be done once. It's also stupid easy: pull your 2da/tlk changes into new files, append a namespace to every row, and you're golden. If you have scripts that use these 2das, wrap your constant assignments in a function call to get the non-namespaced ID (something like Get2daGlobalID(s2DA, sNamespace, nID) maybe). Job done. :)
  • FreshLemonBunFreshLemonBun Member Posts: 909
    It sounds overly complicated with no real benefit and an extra helping of overhead. I'm sure it satisfies some esoteric programmer paradigm but 2da and tlk references just need to work to facilitate extending the game's content in every direction. As it stands you can use overrides to provide the same asset bound "mods" seen in most other games, with incompatibilities when you modify the same asset with two different mods. Most of the issues come when you expand on the content, especially rules content, in which case you're already doing more advanced modding than most games provide facilities for.

    You could use string Get2DAString() if you want to make your own 2da lookup functions that aren't row-ID bound, looping through the entire file until you get a match, for example in the label column. You could have a custom 2da that maps prefixes to appended content packages, do a lookup on that, and then do a lookup on, let's say, feats.2da for a label column entry that matches prefix/namespace+label/id. I think for most large-scale practical purposes that kind of method would be totally disregarded in favor of the built-in method.

    You could also make a program that searches all of the 2da and tlk files and renumbers them so they don't conflict, going through all nested references to keep them consistent. Then search all of the associated scripts and update them any time a function like GetHasFeat is found or the variable of a spell id is checked for example. If you were really inclined you could probably dedicate 5 years of your life to such a pursuit I'm sure.

    I think a more appreciated change would be if Beamdog creates a more efficient form of 2da as they have said it's inefficient many times. So for example you might have the text version of a 2da and then you use a program that "compiles" it into a trimmed down version readable by the game.
  • PlokPlok Member Posts: 106
    edited May 2018

    It sounds overly complicated with no real benefit and an extra helping of overhead. I'm sure it satisfies some esoteric programmer paradigm but 2da and tlk references just need to work to facilitate extending the game's content in every direction. As it stands you can use overrides to provide the same asset bound "mods" seen in most other games, with incompatibilities when you modify the same asset with two different mods. Most of the issues come when you expand on the content, especially rules content, in which case you're already doing more advanced modding than most games provide facilities for.

    That esoteric programmer paradigm comment... that's harsh man :bawling: I can assure you this is a practical problem and not me just banging on about elegance.

    The problem I'm trying to solve is a simple one; two mods add a row to a core 2da (let's say classes.2da), they are totally independent aside from this (so they aren't massively expanding the core rules). If you use both mods at the same time this results in a best case of only one of the classes working and a worst case of all the scripts catching fire. How do we fix this? The answer is simple; allow appending rows to 2das.

    However, this introduces a problem; 2das and scripts reference rows inside 2das/tlks via the ID/row number. The ID/row number is either going to conflict or going to change depending on what other mods are installed. How do we fix this? Stop referencing 2da rows via ID/row number.

    The whole namespacing thing is basically that; giving the ability to reference a row by something other than an ID/row number. I picked it because it's conceptually simple and simple for people to actually use.

    You could use string Get2DAString() if you want to make your own 2da lookup functions that aren't row ID bound, looping through the entire file until you get a match for example in the label column. You could have a custom 2da that maps prefixes to appended content packages, do a lookup on that, and then do a lookup on lets say feats.2da for a label column entry that matches prefix/namespace+label/id. I think for most large scale practical purposes that kind of method would be totally disregarded in favor of the built in method.

    Making your own custom 2das doesn't work for core 2das like classes.2da. As previously stated, the approach falls down when two mods each add a class or spell. Everything inside a 2da that's used in character creation (classes, feats, spells, races, skills) is a problem that can't be worked around.

    Your solution would technically work, but it's not really getting at the essence of what I'm saying. I want to make NWN more amenable to the idea of mixing and matching mods (and being able to make optional extensions to mods). No one in their right mind is going to go through all that to make their changes play nicely with others.

    What I want is a standardised way for mods to reference 2da data that lets them all play nicely together. The key word there is standardised.

    You could also make a program that searches all of the 2da and tlk files and renumbers them so they don't conflict, going through all nested references to keep them consistent. Then search all of the associated scripts and update them any time a function like GetHasFeat is found or the variable of a spell id is checked for example. If you were really inclined you could probably dedicate 5 years of your life to such a pursuit I'm sure.

    Do you have a problem with me? The dismissive "esoteric programmer paradigm" comment above and this paragraph come across as being a tad passive aggressive.

    In reply to this; doing this for the 2das/tlks is doable. Doing it for scripts is halting-problem territory; it's pointless to try because it is unsolvable (there's not enough information in the scripts to do it).

    I think a more appreciated change would be if Beamdog creates a more efficient form of 2da as they have said it's inefficient many times. So for example you might have the text version of a 2da and then you use a program that "compiles" it into a trimmed down version readable by the game.

    I'm vehemently against binary formats. I'm totally for a better text format though.

    Having said that, it's pretty standard (and performant) to use SQLite for storing game data in games. It would actually fix a lot of the problems with 2das right off the bat if they were all stored in SQLite; you could just distribute your changes as a bunch of SQL statements in a .sql file.

    The more I think about it, the more it seems a very attractive idea. You could do things like make a small mod that changes the Monk's BAB to the Fighter's (UPDATE classes SET AttackBonusTable = 'cls_atk_1' WHERE Label = 'Monk') or add some more spells to the Ranger (UPDATE spells SET Ranger = 3 WHERE Label = 'Fear'). I mean, the 2das are basically a relational database in all but name.
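    To make that concrete (a hypothetical sketch using Python's built-in sqlite3; the table is cut down to two columns and the values are illustrative, not the real classes.2da contents):

```python
# Sketch: load classes.2da rows into SQLite, then apply the Monk-BAB
# tweak as a single UPDATE a mod could ship in a .sql file.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE classes (Label TEXT, AttackBonusTable TEXT)")
con.executemany("INSERT INTO classes VALUES (?, ?)",
                [("Fighter", "cls_atk_1"), ("Monk", "cls_atk_3")])

# The entire "mod" is this one statement:
con.execute("UPDATE classes SET AttackBonusTable = 'cls_atk_1' "
            "WHERE Label = 'Monk'")

monk_bab = con.execute(
    "SELECT AttackBonusTable FROM classes WHERE Label = 'Monk'"
).fetchone()[0]
```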
  • ProlericProleric Member Posts: 1,281
    A much simpler improvement would be the automation of 2DA file merges. The master version (top hak or default) could be modified by smaller files (same format, different extension) containing only new and replacement lines, in hak priority order.

    For example, the default 2da might be modified by the CEP additional lines, then by the module author's additions and replacements, so that the module had the last word.

    True, this would depend on line numbers, but that's not a problem if major projects reserve ranges. It worked very well in Dragon Age: Origins.
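    In pseudocode-ish Python, that merge is trivial (the row-keyed layout is invented for illustration; a real tool would parse the 2da text format):

```python
# Sketch of the line-based 2da merge described above: start from the
# master file and apply patch files in hak priority order, where each
# patch carries only new or replacement rows keyed by row number.

def merge_2da(master: dict, patches: list) -> dict:
    """master / patches: dicts mapping row number -> row tuple."""
    merged = dict(master)
    for patch in patches:       # later patches win, so the module's
        merged.update(patch)    # additions/replacements have the last word
    return merged
```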
  • FreshLemonBunFreshLemonBun Member Posts: 909
    @Plok I just don't see the point in long and verbose discussions filled with programmer jargon about things that could be expressed in simple English in a few sentences. Things that also seem to satisfy some kind of idealism rather than a practical solution to a significant problem.

    NWN is good for modding because it is very simple, easy to use, easy for anyone to understand, and it allows things to be done relatively quickly with scripts. I think these kinds of complex solutions to small problems diminish those qualities. If you phrase it as "preventing overlapping 2da references for two distinct rules-based mods", then it's easier to understand that the only real benefit is saving people from themselves. Which isn't even worth it if you don't do the scripts too.

    I just see this sort of thing causing headaches and not really delivering anything new. For what it's worth there are a lot of tools on the vault that I look at and think "yeah... this is a programmer tool" and just skip them.

    With 2das you can open one up, look at it, figure out that the numbers are references, it's very simple. I copy a row from classes.2da, I paste it to the end, I update the row number, now I have a template for my own class. Perfect. What you're suggesting really complicates everything, maybe not for you but consider the entry level requirements just changed from copy/paste to namespacing and now making database queries.

    I prefer the approach suggested by @Proleric; it's better to make an improved merging tool, but you still don't resolve the scripting problem, nor the character-file problem when row numbers have changed from what they're expected to be. Unless someone can automate a solution to that, which might take years, I don't think either approach is going to be used much. It's better to have a compatibility group that maintains and updates a database of ranges that a program can automatically update for participating modders, awarding a "seal of compatibility" or something to participating groups.
  • ProlericProleric Member Posts: 1,281
    @FreshLemonBun for Dragon Age, we used the official wiki as a self-service compatibility tool for issuing ranges.

    One thing I would change is that individual modules were reserving ranges. Since you can only run one module at a time, a single User range would suffice. Unique ranges are only really needed for content packs and other global mods to the game.
  • FreshLemonBunFreshLemonBun Member Posts: 909
    I believe that for NWN2 they use the wiki for the same purpose. I'm not sure about NWN:EE, but I think large projects like the CEP use ranges reserved for larger projects. If the need is deemed serious, then self-policing modders could organise a safer way to keep collaborative ranges.

    On the other hand like you say if it's module specific or content you don't feel should be compatible with another project then you might not want to pad your 2das with unused data. For example if you want to override and replace all the rules or you want to deliver large packs of HD quality content made for NWN EE then you might not want to support outdated content.
  • PlokPlok Member Posts: 106
    @FreshLemonBun I hadn't actually thought about save files. I can see how to fix it pretty simply (just store the group name) but I think you're right that it would be too complicated at this point. I withdraw my request.

    Things that also seem to satisfy some kind of idealism rather than a practical solution to a significant problem.

    I'll admit there's some idealism with the grouping/namespacing, but the merging 2das is definitely practical. I'll explain the concrete problem I'm trying to fix with these suggestions.

    First some background:

    I'm working on updating the PRC. I've updated all of the tooling except for the character creator and am now taking a break from the mess of Java, C# and Make scripts that constitutes the PRC tools to do some actual honest to goodness nwscript coding.

    I'm currently working on modularising it all so that components (like psionics, tome of battle, invocations, etc.) can be removed/disabled without making the scripts even more of an unmaintainable mess. While I'm at it, I'm trying to make the PRC easier to extend and slot in with other NWN mods.

    Ideally I want to take it to a place where you can just install it via the Steam Workshop and it works - regardless of what other mods you have installed or anything like that (this is probably never going to happen though).

    Now my problem:

    I've got at least the seeds of a plan of action for everything except the 2das/tlks. I'm utterly stumped as to what to do with them.

    If I break the PRC up into, say, 8 modules, that's 2^8 = 256 different permutations of modules being active or inactive. So I need some way to merge 2das to handle this.

    I could do this via another custom tool - that would take me further away from the goal of being a good Steam citizen - but would it work? Sort of. It would really suck, to the tune of 5 minutes of .nss compilation to replace all the constants. That's why I wanted 2da merging: to not have to do that. Reserving ranges doesn't help here (well, it sort of does, but you know).

    And now we get to the idealism: if I need 2da merging anyway, why not do it in a way that completely removes the problem of conflicts while we're at it? The Steam Workshop is going to be conflict central, so why not just make conflicts a non-issue? The core problem is simple enough: everything - even the game engine - exclusively uses row IDs to reference 2da rows.
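    To make that conflict concrete, here's a toy Python sketch. It is not PRC code; the class names, the three-row base table, and the helper function are all made up for illustration. It shows how two haks that each append a class at the next free row end up disagreeing about IDs:

    ```python
    # Toy model of classes.2da as a list; the row index IS the class ID.
    base_classes = ["Barbarian", "Bard", "Cleric"]  # rows 0-2 (made-up base)

    def append_rows(table, new_rows):
        """Append rows and report which IDs they actually landed on."""
        start = len(table)
        table.extend(new_rows)
        return {name: start + i for i, name in enumerate(new_rows)}

    # Hak A was compiled with a constant assuming its class lands on row 3.
    ids_a = append_rows(base_classes, ["Warlock"])        # Warlock -> 3
    # Hak B was ALSO compiled assuming row 3, but it loads second.
    ids_b = append_rows(base_classes, ["Swashbuckler"])   # Swashbuckler -> 4
    # Hak B's scripts, hardcoded to ID 3, now silently point at Warlock.
    ```

    Whichever hak loads second gets shifted, and every compiled script and saved character referencing the old ID breaks, which is exactly why merging alone doesn't solve the referencing problem.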

    So! My suggestion doesn't hold up. Anyone got any better suggestions?

    P.S:

    I just don't see the point in long and verbose discussions filled with programmer jargon about things that could be expressed in simple English in a few sentences.

    I'm justifying my suggestions and explaining my thinking. I can't promise it's 100%, but most of my posts follow a pattern: a first paragraph of "Here's a thing!" followed by a few paragraphs explaining why I like the thing. When I reply, I break up my responses by paragraph. If I use jargon, I explain the concept behind it; people who know the jargon can skip those paragraphs. I do it for people's convenience.

    In short, what I am trying to do is have a constructive discussion. I want criticism. If my suggestions can't stand up to criticism, they're bad suggestions and should be culled. I apply the same principles to other people's posts, just so you know.

    Attacking my posts is fine and I encourage it. Attacking me personally via snarky, passive aggressive and/or dismissive comments is not fine and I would appreciate it if you'd stop. I really don't want to fall out with you.
  • FreshLemonBun Member Posts: 909
    What I said wasn't an insult, and I apologize if you felt attacked by what I posted. Sometimes the subjects get too abstract and steeped in jargon, and the explanations leave the essential points less clear than they could be. It's difficult to have a meaningful discussion if people delve too deep into professional programming talk or computer science that is far removed from typical NWN modding.


    As for the PRC, the essential problems with it are that it makes some assumptions and uses some workarounds that might not be desired by modders. Like NWN2, NWN:EE seems to have a 255-row limit on the number of classes that can be displayed in game; unlike NWN2, it doesn't crash when you exceed it. A large pack like the PRC doesn't leave much room for extension, considering there are around 1000 official classes, plus numerous third-party classes and some officially licensed third-party ones. It's similar to the CEP in some respects, like how the CEP fills other size-limited 2das.

    I think it's better to make peace with how 2da references work. Making a class pack is difficult enough as projects go; the way NWN2 modders "solved" this is by stating whether their class pack is compatible with other class packs or not. If you change the new PRC too much, it won't work properly with scripts or character files made for the old PRC, so there are downsides to changing things too much.
  • Shadooow Member Posts: 402
    Sorry, I didn't read all the posts here - I hadn't thought of any reasonable data format extension until now, so hopefully this wasn't already discussed.


    Multiple languages in single custom TLK file.

    Now, I am not 100% sure this isn't possible already, but since nobody seems to know whether it is and I've never seen it done, I'll assume it is not.

    It would be great if we could make a custom TLK file for our module that contained text for more than one language, and have the client handle it properly.
    I.e. if the client uses the German language and the server sends TLK line 16777226, the client would try to read line 10 in the custom TLK assigned to that module for German, matching the current character's gender. If the line didn't exist (for either gender), it would fall back to reading line 10 in the English custom TLK.

    This would allow multi-language modules without the need to provide a separate custom TLK file for each language, and would make installation/downloading much simpler for users.
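    As a sketch of the proposed fallback order, here is some illustrative Python. The 16777216 (0x01000000) offset reflects how NWN maps StrRefs into a module's custom TLK today; the per-language/per-gender tables and the function itself are hypothetical, since no such multi-language engine feature currently exists:

    ```python
    CUSTOM_TLK_OFFSET = 16777216  # StrRefs at or above this come from the custom TLK

    def resolve_strref(strref, language, gender, custom_tlks):
        """custom_tlks maps (language, gender) -> {line_number: text}.

        Lookup order: client language (current gender, then the other),
        then the English tables as a fallback.
        """
        if strref < CUSTOM_TLK_OFFSET:
            return None  # would be served by the stock dialog.tlk instead
        line = strref - CUSTOM_TLK_OFFSET
        other_gender = "male" if gender == "female" else "female"
        for lang in (language, "english"):
            for g in (gender, other_gender):
                table = custom_tlks.get((lang, g), {})
                if line in table:
                    return table[line]
        return None  # missing everywhere: the client would show the raw StrRef
    ```

    A single merged TLK shipped per module, resolved this way on the client, would give each player text in their own language with English as the safety net.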
  • Shadooow Member Posts: 402
    edited May 2018
    Plok said:

    Anyway, while we're on the subject of 2das; can we please have some method of having a hak APPEND rows to a 2da instead of just overwriting it? It would be great to have multiple unrelated haks do things like add custom feats or classes.

    I was thinking about this a lot. I tried to do this myself using NWNX.

    The problem here is resource loading, plus the fact that this is mainly a client-side feature and NWN:EE doesn't support and won't support NWNCX.

    AFAIK, on starting a module, NWN first loads all applicable files in a certain order. This means that after this process is done, only one version of each 2DA file is available (the highest-priority one), which is then opened and its values provided to the game.

    This is a major drawback. My first thought was: OK, so when the game loads baseitems.2da I will load baseitemsx.2da and overwrite values. But again, only one baseitemsx.2da can exist, so only one "2da update" file works at a time.

    But what I did was quite trivial; basically, my plugin works this way:

    When the engine wants to load a 2da, the plugin also loads a 2da named original_name + "x" (unless the original 2da name is too long, in which case the last character is replaced by "x" instead).

    If this 2da exists, then for each of its rows, the plugin reads the first column value, which holds the line number the row is supposed to go to. It then copies the values of the remaining columns into the original 2da at the specified row and columns.

    A 2DA snippet example:

        2DA V2.0

           Row  Label     Name      Plural    Lower     ...
        0  60   Archmage  16822266  16822267  16822268  ...
    This will add the archmage class to any module, using any haks. If a class already exists on line 60 it will be overwritten, but nothing else is touched.
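    The merge rule described above can be modeled in a few lines of Python. This is an illustrative re-implementation, not the actual NWNX plugin; the whitespace-splitting parser is deliberately simplified (real 2DA fields can be quoted and columns can be missing):

    ```python
    def parse_2da(text):
        """Minimal 2DA reader: header labels plus rows (leading index dropped)."""
        lines = [l for l in text.splitlines() if l.strip()]
        header = lines[1].split()                  # line 0 is the "2DA V2.0" signature
        rows = [l.split()[1:] for l in lines[2:]]  # strip the per-row index column
        return header, rows

    def apply_2dx(base_rows, overlay_rows):
        """Copy each overlay row onto the base row named by its first column."""
        for row in overlay_rows:
            target = int(row[0])      # first overlay column: destination row number
            values = row[1:]
            while len(base_rows) <= target:         # pad with empty rows if needed
                base_rows.append(["****"] * len(values))
            for col, value in enumerate(values):
                base_rows[target][col] = value
        return base_rows
    ```

    With the archmage snippet above as the overlay, the row lands on line 60 of the base 2da regardless of how many rows the base file had, which is exactly what lets the same update file ride on top of arbitrary haks.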

    Ideally, it would be named 2da_name.2dx or something similar, but I don't know how to handle non-2da file types using NWNX.

    I'm not sure how useful this is, given that each 2da will only be updated once, by the highest-priority 2da-update file. As I said, resource loading is the biggest issue here.
    Post edited by Shadooow on
  • Plok Member Posts: 106
    edited May 2018
    @Shadooow I've been busy and haven't been on the forums, so sorry for not replying.

    That is really neat. It's also exactly the sort of thing I wanted; I don't need to be able to dynamically add rows to a 2da at run time, I just need to do it at load time.

    Now all it needs to be utterly perfect is a way of merging conflicts in a non-destructive manner. I honestly can't think of an approach that's even tractable without having some other way of referencing a 2da row besides row number/ID. Rewriting 2das to update the dependencies strikes me as a tractable (if quadratic) problem but scripts... yeah... Alan Turing would probably have things to say about that. ;)