
**Sherwood Botsford**
macrumors newbie, Original poster
Jan 19, 2018
### Introduction

Currently using Apple Aperture. Need a replacement.

I've been thinking a lot about photo management.

I'm starting to avoid the word 'DAM' as it increasingly refers to industrial sized software costing tens or hundreds of thousands of dollars. So let's look at what I mean by a photo manager:

**Browser** Pix initially come in in batches, and they go somewhere in a folder structure on your computer. Many people use some combination of Year/Month/ plus a string describing the event. Often, remembering that the shot took place on your trip to Italy, or that it was the Smith & Brown wedding, is sufficient.

**Tagger** But what do you do when you are looking for the closeup of a butterfly? It was an incidental pic on some holiday, but which one? Now metadata comes into play. If all your holiday shots are tagged 'Holiday' and your program can search existing metadata, your problem is solved: search for holiday and focal distance less than 5 feet. You still may have a bunch to wade through. If, in addition to some general keywords for the batch, you add a few per image -- e.g. "Butterfly" -- you have a big win.

Tagging is hard. You want to tag it with multiple things. E.g:

* Describe the scene.
* Identify the location. GPS is fine, but "Loch Ness, Scotland" or "Kensington Market, Toronto, Ontario, Canada" is easier to visualize.
* ID the people in the scene. Classify them more generally. (Woman with child; Young boy...)
* Describe the technical aspects -- close up, high/low key, lighting
* One or more classes of description about the scene -- weather, mood.
* Usage: Have you sold exclusive rights for this image? Exclusive for 18 months for a calendar?

These are *facets*. I prefer to go through a set of images several times, concentrating on one of these at a time. Sometimes a facet is irrelevant. Weather makes no sense for an interior shot. If you do facets, you need a way to search for images that don't have an entry for facet X. You also need a way to mark a facet as irrelevant.

Crudely, you can implement these with constructs like WEA:Cloudy, but then you still have to be able to search for images that don't have WEA:* as a keyword.
And you have to decide what to do where the facet isn't relevant: WEA:N/A.
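A minimal sketch of this prefix scheme in Python (the data and function names are illustrative, not from any real DAM):

```python
# Facet keywords encoded as PREFIX:value strings, e.g. "WEA:Cloudy".
# "WEA:N/A" marks a facet as deliberately not applicable.

def facet_value(keywords, facet):
    """Return the value for a facet, or None if the facet is absent."""
    prefix = facet + ":"
    for kw in keywords:
        if kw.startswith(prefix):
            return kw[len(prefix):]
    return None

def missing_facet(images, facet):
    """Find images that have no entry at all for a facet --
    neither a value nor an explicit N/A."""
    return [name for name, kws in images.items()
            if facet_value(kws, facet) is None]

images = {
    "italy_041.jpg": ["Holiday", "WEA:Cloudy"],
    "italy_042.jpg": ["Holiday", "butterfly"],   # facet forgotten
    "kitchen_01.jpg": ["interior", "WEA:N/A"],   # facet irrelevant
}

print(missing_facet(images, "WEA"))   # ['italy_042.jpg']
```

Only the image with no WEA:* keyword at all shows up, which is exactly the "nag me" behaviour a real facet feature should provide.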

Having some kind of support for actual facets would be a big win.

**Searcher** No point in tagging if you can't search the data. Two programs I trialed, Mylio and Photo Supreme, had no provision to search EXIF data -- where stuff like time of day, focal length, and camera model is kept. One program allows you to search for only one tag at a time: I can search for Holiday, or I can search for butterfly, but I can't search for shots that have both "Holiday" and "butterfly". Ideally you want full boolean search support with 'and', 'or', and 'not', parentheses for grouping, and wildcards for partial matches.
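Once keywords and a few indexed EXIF fields live in one record, boolean queries like this become one-liners. A hedged sketch (the field names are invented for illustration, not any real program's schema):

```python
# Each image record carries keywords plus a few indexed EXIF fields.
catalog = [
    {"file": "a.jpg", "keywords": {"Holiday", "beach"},
     "focal_distance_m": 30.0},
    {"file": "b.jpg", "keywords": {"Holiday", "butterfly"},
     "focal_distance_m": 0.8},
    {"file": "c.jpg", "keywords": {"butterfly"},
     "focal_distance_m": 1.2},
]

def search(catalog, predicate):
    """Return files whose record satisfies an arbitrary boolean predicate."""
    return [rec["file"] for rec in catalog if predicate(rec)]

# "Holiday" AND "butterfly" AND focal distance under 5 feet (~1.5 m):
hits = search(catalog, lambda r: {"Holiday", "butterfly"} <= r["keywords"]
                                 and r["focal_distance_m"] < 1.5)
print(hits)   # ['b.jpg']
```

Arbitrary and/or/not combinations fall out for free, since any Python expression can be the predicate.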

**Version tracker** A photograph for a professional may have a long history. You often have a shot, then export it in some altered form (cropped, resized, sharpened, colour adjusted, watermarked). It's nice to be able to find the original 5 years from now. One recommended practice I ran into had the following:

* Master image was raw.
* Archive version was digital negative (DNG).
* Processing version was 16-bit TIFF or PSD.
* Delivered version was TIFF or JPEG.

This requires a minimum of 4 versions. Add to that:

* Watermarked versions.
* Reduced resolution versions for web pages.
* Colour matched versions for specific printing environments.
* Cropped versions for mobile web pages.

So that's the base case. Implementations differ, and they refine this somewhat.

### Online resources

**Impulse Adventure** (site: https://www.impulseadventure.com/photo/)
Unfortunately out of date, but it still has several good articles.

* Catalogs and Multiple versions. https://www.impulseadventure.com/photo/flow-catalog-versions.html

* Important Features of Catalog Software. https://www.impulseadventure.com/photo/flow-catalog-features.html

**Controlled Vocabulary** (site: https://www.controlledvocabulary.com/ )

* Using Image Databases to Organize Image Collections http://www.controlledvocabulary.com/imagedatabases/

Also has a good forum/mailing list.


***






## Requirements:

The four functions above describe what it should do. Here are some more details about how it should do it.

### Server requirements
I can see implementing this in one of two ways: either as a stand-alone program or as a local web server. The latter has the advantage that it would scale for a family or a small photo business.

Cloud services are slow when you are talking about 10-12 MByte files. My network connection takes several seconds per MByte. Cloud services for metadata have to be well optimized -- you really don't want to issue 3000 keyword change requests individually when you change the spelling of a keyword. So:

* Not cloud based.
* Runs on Mac or on local apache web server.

### Keyword handling
* Fast keywording. Aperture allows drag and drop from a list, multiple sets of hotkeys for words used frequently, copy/paste of keywords from one photo to another, and keywords organized in folders. It also allows search for a keyword: a list matching what you've typed so far appears. Other programs that have good keywording include IMatch and Photo Mechanic. One of the key aspects of this is to have multiple ways to do things.
I like Aperture's multiple preset buttons -- combine them with facets.

A *history* of keywords might help: a pane with the last N keywords in it. Chances are about 80% that the next word I use will be one of the last 20 I used.
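The history pane is just a most-recently-used list. A sketch of the bookkeeping (the size and names are illustrative):

```python
from collections import OrderedDict

class KeywordHistory:
    """Keep the last N distinct keywords, most recent first."""
    def __init__(self, size=20):
        self.size = size
        self._items = OrderedDict()

    def use(self, keyword):
        self._items.pop(keyword, None)       # re-using a word moves it to the front
        self._items[keyword] = True
        while len(self._items) > self.size:
            self._items.popitem(last=False)  # drop the oldest entry

    def recent(self):
        return list(reversed(self._items))

hist = KeywordHistory(size=3)
for kw in ["Holiday", "butterfly", "Italy", "butterfly", "macro"]:
    hist.use(kw)
print(hist.recent())   # ['macro', 'butterfly', 'Italy']
```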

* Full access to standard metadata: EXIF, IPTC, subject to the limits of the file format.
* Controlled vocabulary. I want an extra step to add a new keyword to my list of keywords. This helps with the Sommer Vacashun problem.
* Hierarchical vocabulary. E.g. separate entries for Birds -> raptors -> falcon and Planes -> fighters -> falcon. Parents are stored with keywords. Moving a keyword in the master list, or changing its spelling, corrects all usage in photos. This can be done as a background task.
* Parent items are automatically entered as keywords. (With the correct database linkage, this comes free as a side effect of the point above.)
* Synonyms -- I can define "Picea glauca" as a synonym for "White Spruce"; entering one enters the other.
* Facets: For a set of pictures I want to be able to define a set of facets or categories for collections or folders. Facets would be things like Weather, Who, Where, Ecosystem, Season, Lighting. Not all collections would have all facets, but a collection having a facet would nag me to fill it in. A facet would have a negation for 'not applicable' (weather isn't applicable inside a house; who isn't applicable in a landscape shot). Facets allow me to go through a collection in multiple passes and fill in the missing keywords.
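The parent-propagation and synonym points can be sketched together; the data model here is purely illustrative:

```python
# Hierarchical vocabulary: each keyword knows its ancestor chain, so
# applying "falcon (bird)" also applies raptors and Birds.
# Synonym pairs are entered together.

hierarchy = {
    "falcon (bird)": ["raptors", "Birds"],
    "falcon (plane)": ["fighters", "Planes"],
    "White Spruce": [],
}
synonyms = {"White Spruce": "Picea glauca", "Picea glauca": "White Spruce"}

def apply_keyword(tags, keyword):
    """Add a keyword plus its ancestors and any synonym to a tag set."""
    tags.add(keyword)
    tags.update(hierarchy.get(keyword, []))
    if keyword in synonyms:
        tags.add(synonyms[keyword])
    return tags

print(sorted(apply_keyword(set(), "falcon (bird)")))
# ['Birds', 'falcon (bird)', 'raptors']
print(sorted(apply_keyword(set(), "White Spruce")))
# ['Picea glauca', 'White Spruce']
```

Because the photo stores only the leaf keyword's ID in a real database, renaming or moving a keyword in the master list automatically corrects every photo that uses it.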

### Searching
* Complex searches: Find all shots between 2012 and 2015, shot in December or January, shot with my Nikon D70, with keyword "snow", rating of 3 or better, shot after 3 pm. (Yes, I do use searches like that.)
* Saved searches. These are the equivalent of smart albums in Aperture. As new pix meet the criteria, they would appear.

### Version Tracking
* Version tracking: If a lower-resolution, cropped, photoshopped, composited, or black-and-white image is produced from a master, the system should show that it's a derived image and allow access to the master. A master should be able to list its derived images. Derived images are not linear but form a multi-branched tree.
* If my camera produces JPEG and Raw versions, I want the JPEG to be shown as being derived from the Raw version.
* Metadata applied to a master should propagate down to derived images.
* Some form of exception handling for this: e.g. a -keyword to prevent a people identifier being applied to an image where that person was cropped out.
* Ability to track through external editing programs. E.g. if I edit a photo in Photoshop, it will mark the PSD file as derived and restore as much of the metadata as the PSD format allows. If Photoshop is used to create a JPEG image, that too is tracked.

### Data robustness
* All metadata is indexed.
* Metadata is also written to sidecar files.
* Where possible metadata is written to the image file itself. (optional -- can stress automated backup systems)
* Through file system watching, name changes and directory reorganization are caught. Relevant sidecars are also renamed, and the database updated with new file location/name. Sidecar contents include the name of their master file.
* Should be possible to rebuild entire database from images + sidecars. Should be able to restore all file metadata from database. This requires a lot of under-the-hood time stamps to determine which has priority.
* All database actions should be logged and journaled, so they are reversible.
* Reasonable speed with catalogs of more than 100,000 images.
* Support for previews of all common image formats and most raw formats.
* Previews and thumbnails are treated as versions of the master. They inherit metadata.
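The rebuild-in-either-direction rule above comes down to comparing timestamps. A minimal sketch of the decision, with an invented record layout:

```python
from datetime import datetime

def authoritative(db_record, sidecar_record):
    """Pick whichever copy of the metadata was written more recently.
    Both records carry a 'modified' timestamp; ties go to the database.
    (The record layout is invented for illustration.)"""
    if sidecar_record["modified"] > db_record["modified"]:
        return sidecar_record
    return db_record

db = {"modified": datetime(2018, 9, 1, 12, 0), "keywords": ["Holiday"]}
sc = {"modified": datetime(2018, 9, 2, 9, 30), "keywords": ["Holiday", "Italy"]}

print(authoritative(db, sc)["keywords"])   # ['Holiday', 'Italy']
```

A real implementation would do this per-record during a consistency scan, rewriting whichever side lost.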



### Nice to have:

* Simple non-destructive editing -- crop, brightness, contrast.
* Rating system
* Smart albums
* Drag and drop functionality with other mac apps.

Suggestions?

### Notes on current state of the art:

* Nothing I've found supports version tracking, especially through an external program. Lightroom and Aperture both support a type of versioning -- different edits on the same master -- but at least Aperture doesn't copy metadata to a new version. Aperture supports Stacks -- a group of related pictures.
* Lightroom: Doesn't support PNG; very clunky interface; slow on large catalogs.
* Mylio: The home version doesn't support hierarchical keywords, doesn't index EXIF information, and does not allow OR syntax in searches.
* Photo Mechanic: Fast for keywording and culling, but has very limited search capability.
* IMatch: A possible contender, but requires an MS Windows box.
* Photo Supreme: Erratic quirks. Crashes. One-man shop. Can't search EXIF in a useful way.
* Fotostation: AFAIK no underlying database; has to read metadata from images/sidecar files on startup. Slow after 10K images. (They have server-based software too, which is big bucks.)
* Luminar: A DAM has been promised Real Soon Now, but no demos, storyboards or feature lists have been published. There is a claim that it is in beta, but no one on their fairly active forum will admit to being part of the beta group.
* Affinity: Similar to Luminar.

### Command-line tools

Many of the special features for version tracking could be implemented with scripts calling these programs.

* ImageMagick -- good for whole-image conversions; can also read/write internal metadata and sidecars.
* exiftool -- reads/writes EXIF data; reads most makernotes.
* fswatch -- not really an image processor, but hooks into the operating system and can alert when files have changed -- modified, renamed, moved.



### Enterprise level

* WebDAM No real information about capabilities on web site.
* Extensis. Expensive.
* Bynder. Joke program. Cloud based set of shoeboxes.
* WIDEN. Cloud only.
* Asset Bank. Starts at $500/month for up to 50 users.

### Metadata Storage

There are three places metadata can be stored:

* In the image.
* In a database.
* In a separate file for each image (a sidecar file). Typically these files have the same name as the primary file, but a different suffix.

If at least some cataloging information is written to the image, then you can reconnect a file to your database. In principle this can be a single unique ID.

This saves you when:

* You moved or renamed an image file.

If you can write more info into the file -- keywords, captions -- then you are also saved when:

* Your database is corrupted.
* You upgraded your computer and your database program doesn't work there.

Sidecar files allow you to recover all your metadata if your database crashes.

***Downsides of storing data in the image***

Writing to the original files can corrupt the file. Most raw formats are well enough understood now to at least identify and replace strings of metadata with a same-length string. If you tell your camera to put in the copyright string

Copyright 2018 J. Random Shutterbug Image XXXX-XXXX-XXXX-XXXX-XXXX-XXXX

then as long as the DAM keeps that string the same length, you are golden.

Keeping all metadata (or as much as you can) in the original images makes for very slow access. Your program has to read at least the first few blocks of every image. Depending on the file structure, adding too much data may require rewriting the entire file. Any program that moves the boundaries of data sub-blocks had better be well tested.

Writing data back is time consuming.

Some file formats don't have any metadata capability.

Some file formats (Photoshop PSD) are noted for mangling metadata.

A glitch during the write process can corrupt the image file. The alternative, writing a new file, then replacing the old file requires that the entire file be both read and written, rather than just a chunk of it. This has serious performance issues.

***Downsides of Databases***

Databases are fast, but they are blobby, and you are writing into the middle of blobs of data. If the implementation of the database is solid, there isn't much to worry about. But hard disks have errors, and a single error can make a database partially or fully unusable. Good database design has redundancy built in so that you can repair/rebuild.

Databases are frequently proprietary. Data may be compressed for speed. Getting your data out may be tricky. (Problem for people using Apple Aperture)

Databases are frequently optimized in different ways. In general, robustness is gained at the cost of performance and complexity. One compromise is to write all changes first to a transaction file (fast) and then have a background process apply them to the database. This slows down access some -- you have to check both the main database and the transaction file -- but unless the transaction file grows bigger than memory, this shouldn't be noticeable.

***Downsides of Sidecars***

You have to read a zillion files at startup.

If you do a batch change (add the keyword "Italy" to all 3000 of your summer holiday trip shots), the catalog program has to open, modify, and write back 3000 files.

If you rename a file and don't rename the sidecar file too, your metadata is no longer connected to your image.

### Best practice

Opinion only here: Sorry.

* You want a unique asset tag that resides in the image. This can be an actual tag like the copyright one mentioned above, or it can be derived from information in the image. It could be the EXIF time stamp (not unique -- multiple shots per second, multiple cameras). If your program reads makernotes, the best bet is camera model + camera serial number + timestamp + hundredths of a second.

* You want a database for speed. It, of course, has the unique ID.

* You want sidecars for rebuilding your database, and for data portability. They have the unique ID.

If the database crashes, it can be rebuilt from the sidecars.

If a sidecar is corrupted, it can be rebuilt from the database.

If an image is renamed the ID can be used to reconnect it to the sidecar, and to fix the database.

To make this work, you have to use a lot of timestamps. If the sidecar is more recent than the latest time stamp in the database record, then the sidecar is the authoritative record.

You also have to have internal checks on data integrity. The record for an image (sidecar or database) needs a checksum to verify that the data isn't corrupt.

Given the relatively fragile nature of raw files, best practice is a system that writes to the raw file at most once. This is why the EXIF time stamp + hundredths and the copyright string work well. You can include the camera model and serial number in the copyright so that the copyright message is unique to the camera. At this point you have the ability to create, and recreate, a unique ID for each image. If the DAM has the ability to modify the file, you can create this ID once, which saves some time if you ever have to rebuild the database.
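The model + serial + timestamp + hundredths recipe can be sketched directly. The field values below follow EXIF conventions (Model, SerialNumber, DateTimeOriginal, SubSecTimeOriginal), but treat the exact fields as an assumption -- makernote support varies by camera:

```python
import hashlib

def image_id(model, serial, timestamp, subsec):
    """Derive a stable unique ID from fields that never change after capture."""
    raw = f"{model}|{serial}|{timestamp}|{subsec}"
    return hashlib.sha1(raw.encode()).hexdigest()[:16]

a = image_id("NIKON D70", "4025912", "2018:01:19 15:42:07", "31")
b = image_id("NIKON D70", "4025912", "2018:01:19 15:42:07", "32")
print(a != b)   # True -- two frames in the same second still get distinct IDs
```

Because the ID is derived, not stored, it can always be recomputed from the image itself during a database rebuild.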

Having as much of the metadata in the file as possible means that it travels with the file. This is a win, but comes with the risk of corruption. Possibly the best strategy is to leave the original intact and, for clients who need raw data, either add metadata to a copy or to a derived full-data equivalent (e.g. DNG).

Sidecars don't need to be updated in real time. The slick way to do this would be that whenever the database makes a change to a record:

* Make a new record that duplicates the old record in the database.
* Make the change in the new record.
* Flag the new record "not written to sidecar".
* Mark the old record "obsolete".
* Another thread writes the sidecar files out: write the new one, then delete the old one (or rename the new one to the old one's name).
* Periodically run a cleanup on the database, removing obsolete records older than X days. This gives you the ability to roll back changes.
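The duplicate-flag-cleanup cycle above can be sketched as an append-only record log (the schema is illustrative):

```python
# Each change appends a new version of the record; old versions are
# kept (flagged obsolete) until a periodic cleanup, which is what
# makes rollback possible.
import itertools

_ids = itertools.count(1)
log = []   # the newest non-obsolete record for an image wins

def current(image):
    for rec in reversed(log):
        if rec["image"] == image and not rec.get("obsolete"):
            return rec
    return None

def change(image, **fields):
    old = current(image)
    rec = dict(old or {"image": image})          # duplicate the old record
    rec.update(fields, version=next(_ids),
               written_to_sidecar=False)         # flagged for the writer thread
    if old is not None:
        old["obsolete"] = True                   # kept for rollback
    log.append(rec)
    return rec

change("dsc_100.nef", keywords=["Holiday"])
change("dsc_100.nef", keywords=["Holiday", "Italy"])
rec = current("dsc_100.nef")
print(rec["keywords"], rec["written_to_sidecar"])
# ['Holiday', 'Italy'] False
```

A background thread would scan for `written_to_sidecar=False` records, write the sidecars, flip the flag, and a periodic job would purge obsolete records older than X days.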

This is not complete: it doesn't address the issue of non-destructive edits. Many programs now allow the creation of multiple images from the same master file and do not create a new bitmap, but rather a file with a series of instructions for how to make the image from the master. AFAIK all such methods are proprietary. This results in a quandary, as the apps that do a good job of tracking metadata may not be able to deal with the non-destructive edits. This can be critical if you crop a person out of an image, crop to emphasize a different aspect, give it a different caption, etc.

The workaround is to always write out a new bitmap image from a serious edit. Ideally you have a script that looks for new NDEs and writes out an image based on them, copying the metadata from the master, and at some point brings it up for review for changes to the metadata.



### Robustness against external programs.

I like having an underlying file structure organization. I like the idea that if I produce a bunch of cropped, watermarked, lower resolution, etcetera versions of an image that my catalog will track that too.

But if the underlying file structure is exposed to Explorer or Finder, then you have the risk of a file being renamed or moved, and the database is no longer in sync with your file system.

To nip in the bud answers of the form "This is impossible", here's how to "Finder-proof" your image database.


* When an image is edited, a file system watcher notes that the file was opened. The file goes onto the 'watch' list. (The program fswatch does this on the Mac. I use it to update my web page when my local copy has been edited.)

* When a new file appears in a monitored directory tree, it's noted.

* When a file is closed, this is also noted. If a new file has been created, it is checked for metadata. If the new file's metadata matches an existing file, the existing file's metadata is used to repopulate missing data in the new file. (Photoshop is notorious for not respecting all metadata.)

* The database is updated, with the new file marked as a derivative of the original file.

* Optionally, a suffix may be added to the new file's image number, showing whether it derives directly from the original or from another derivative.

To make this work, the two components are a unique ID that can be calculated from the master, and a file system monitor program that catches create, move, change, and rename events.
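A sketch of the matching step: given a newly created file, read whatever ID it carries and look up the master. Here `read_embedded_id()` is a stand-in for an exiftool call, and the watcher event plumbing is assumed:

```python
# When the watcher reports a new file, match it to a master by the
# unique ID embedded in its metadata, then record the derivation.

db = {
    "8f2a91c3": {"master": "dsc_100.nef", "derived": []},
}

def read_embedded_id(path, fake_metadata):
    # In real life: shell out to exiftool and parse the ID
    # out of the copyright (or similar) field.
    return fake_metadata.get(path)

def on_file_created(path, fake_metadata):
    uid = read_embedded_id(path, fake_metadata)
    if uid in db:
        db[uid]["derived"].append(path)
        return db[uid]["master"]
    return None   # unknown file: queue it for manual review

metadata = {"dsc_100_web.jpg": "8f2a91c3"}
master = on_file_created("dsc_100_web.jpg", metadata)
print(master, db["8f2a91c3"]["derived"])
# dsc_100.nef ['dsc_100_web.jpg']
```

Files with no recognizable ID fall through to a review queue rather than silently orphaning, which keeps the database in sync even when Finder or an editor created the file.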
 
Hi, sorry, I didn't read all of it.

You need to look at Adobe Bridge for a free asset-manager solution, and then something like Apple Photos, Lightroom, or Capture One Pro for editing like you did in Aperture.

There are other tools like Affinity, Luminar, and Darktable that you can look at too, but this extensive list of requirements is out of scope for a number of tools.

A lot of your needs above would be met with a wider workflow process, not necessarily by a single tool.

Sorry can't be of more help.
 
holy cow, you need a tl;dr version

All I can say is if you're moving on from Aperture, then do what the majority of people do: go with Lightroom. Many of us abhor the subscription licensing, I get that, I really do, but they are the industry leader for a reason. Lightroom provides the non-destructive editing and DAM capability all rolled up into one app. No other app comes close to the features and abilities that LR has.

I didn't really read through the entire post, as it was too long, but if LR is good enough for most photographers, the odds of it meeting your needs are high.
 
> Hi, sorry, I didn't read all of it.
>
> You need to look at Adobe Bridge for a free asset-manager solution, and then something like Apple Photos, Lightroom, or Capture One Pro for editing like you did in Aperture.
>
> There are other tools like Affinity, Luminar, and Darktable that you can look at too, but this extensive list of requirements is out of scope for a number of tools.
>
> A lot of your needs above would be met with a wider workflow process, not necessarily by a single tool.
>
> Sorry can't be of more help.

Bridge doesn't keep a database. So every time you change folders it has to re-read all the metadata. For this reason it doesn't handle searches well. Try using Bridge with 50,000 images.

I agree that a wider workflow helps, but that means that the robustness against external file operations is a 'must have' for the database.

One model I've looked at:
* Use Photo Mechanic for tagging. It's superb at that. It writes into the files, and understands controlled vocabularies.
* Use exiftool to extract metadata for the database. Scripting could automate synchronization between database and sidecar files.
* Use MySQL as the database, and use command-line tools to update it from changes in the file system.
* Use Affinity/Luminar/Photoshop/Lightroom as image editors.
* Use fswatch to monitor the file system.

Don't have a good candidate for the browser/searcher function. The search has to be able to read the database, OR the database must have a set of command-line tools that can access it.

This approach requires a fair amount of scripting, and is still likely to be somewhat fragile.
> holy cow, you need a tl;dr version

The long list is what a DAM ought to do. And having it in a single package is helpful. Would you want a word processor that required you to compose text in one application, format it in another, and needed a third to create tables and footnotes, a fourth for indexes, and a fifth for printing?


The first 10 paragraphs *do* give a summary. But, yeah. I hear you.
 
> The first 10 paragraphs *do* give a summary
I'm sorry, but 10 paragraphs is not a summary :eek:

> One model I've looked at:
> * Use Photo Mechanic for tagging. It's superb at that. It writes into the files, and understands controlled vocabularies.
> * Use exiftool to extract metadata for the database. Scripting could automate synchronization between database and sidecar files.
> * Use MySQL as the database, and use command-line tools to update it from changes in the file system.
> * Use Affinity/Luminar/Photoshop/Lightroom as image editors.
> * Use fswatch to monitor the file system.

For me, that's way too much work, I mean you'll be spending all your time managing this, vs. taking pictures. Get a tool that doesn't require a ton of work, and doesn't get in your way.
 
> I'm sorry, but 10 paragraphs is not a summary :eek:
>
> For me, that's way too much work, I mean you'll be spending all your time managing this, vs. taking pictures. Get a tool that doesn't require a ton of work, and doesn't get in your way.

I know what you mean. But a couple months ago, I spent a day looking for a picture that I had taken -- a rose hip covered in hoar frost taken just as the sun broke through.

I had a pic used on a web page. Wanted to reuse it in a slide presentation, but for that I needed a higher res version. Could I find the original? No.

I have scripts that produce lower res versions for web pages. I need to be able to associate those images with the master.

People give me updated metadata ('2nd guy from the left is Steve, not Mike'). I want to enter that once and have that change move to all copies.

That's why I'm trying to do it all in one package, or at worst 1 package + external editor. I want this to just happen, and not be tied to remembering to run scripts, and tweak things.

Too many photographers have a library they don't revisit. They take pix, maybe fix them up, and move on, rarely revisiting, reusing, or sharing those images. I really don't understand why people aren't looking at this and saying, "Yeah, why *don't* DAMs do all this stuff?"
 
I'm not a pro, but why do you need to keep four or more copies of a single photo? I keep the base raw file and a finished, full-res JPG. If in two years I need a small-res version of photo ABC, I just go into LR and export a one-off. Metadata gets stored in the raw file before exporting as a final JPG. Yes, of course you might need to add or change things later, but you can always use the paint bucket tool, or just find the copies if you've named them similarly, highlight them all, and then change the metadata.

I agree with @maflynn that you are spending more time managing than enjoying the process.
 
As someone else who is reluctantly saying a long goodbye to Aperture, I, too, have been on the hunt for the Perfect Software that will provide me with intuitive, easy interface for editing images and also an intuitive, easy, seamless solution to setting up and maintaining a library/catalog which I can search quickly and immediately locate any given image..... Haven't found it yet! I've tried various programs in their trial versions (still am doing so for a couple) and have purchased but am still struggling to learn all the ins-and-outs of Capture One Pro for Sony. I read the lengthy, but well-thought-out first post with interest, hoping to find some answers which will help me in my own quest, too.....

I am someone who much, much more prefers the process of actually shooting the photos as opposed to dealing with them afterward in the computer. I am not keen on retouching and tend to want to spend as little time as possible in the editing process, hopefully having accomplished getting the shot pretty close to "right" in the camera in the first place.

For editing so far I am rather pleased with Luminar, and also I've found a couple of good RAW viewers (another problem I ran into this summer was needing to find a program to convert the RAW images from my new Sony RX100 M6, as at first there was only Sony's Imaging Edge and nothing else), but I'm still looking for "the one" program which will magically do it all, from viewing my RAW images and quickly culling them to allowing for hierarchical structure in my keywording and catalog (I want a catalog where I can drill down from "Birds" > "Water Birds" > "Wading Birds" > "Egrets" or "Birds" > "Water Birds" > "Ducks" and so on...... Pretty specific!). From there, stepping right into image editing, completing that and exporting to the final destination and -- done! Six months later, I remember a particular photo of an egret that I want to look at again, run a quick keyword search in the DAM and voila, there it is! In the past I've shot a lot of birds, so really just having "Birds" as the keyword isn't going to cut the mustard for me.

Actually, this will be my first experience with setting up and using a DAM, as in years past I never bothered to do so, much to my regret now. I saw this in spades when working on my archiving project this past spring! Anyway, I am determined to do things right this time.....

Years ago I used Adobe's Photoshop, but never Lightroom, as I got started with Aperture and that was that. Aperture and Lightroom both debuted around the same period of time, and although I tried Lightroom, I much preferred Aperture. Eventually I found I was using Aperture pretty much exclusively and ignoring Photoshop, so never bothered going beyond CS3 and when I got a new computer just never even attempted to install CS3 on it. Now that Adobe has their subscription model I have been even less inclined to use it, especially since for the past several years I had kind of put photography on the back burner and wasn't shooting many photos. Now that I'm becoming more active again, at some point I may revisit the idea of Photoshop and Lightroom.....

In the meantime, though, I do like Luminar 2018 quite a lot for editing and I am curious to see what they will come up with in terms of a DAM, which they have been promising for rather a long time. I used Photo Mechanic a long time ago, too, and really liked it, so that is another program I am likely to consider again, as it is speedy for reviewing/culling images. This past spring I began working on a project archiving some of my older images and would love to nail down the right program to use with it as I resume where I left off so that I can incorporate all the images, both older and current, into a catalog/library of some sort.....
 
Sorry, but Lr would tick most all the boxes you cite (maybe...too many IMHO), along with maybe Photo Mechanic (which, in large part, is designed to work with Lr).

For example, you cite this:

> I know what you mean. But a couple months ago, I spent a day looking for a picture that I had taken -- a rose hip covered in hoar frost taken just as the sun broke through.
>
> I had a pic used on a web page. Wanted to reuse it in a slide presentation, but for that I needed a higher res version. Could I find the original? No.
>
> I have scripts that produce lower res versions for web pages. I need to be able to associate those images with the master.
>
> People give me updated metadata ('2nd guy from the left is Steve, not Mike'). I want to enter that once and have that change move to all copies.

In Lr (and other similar managers) the versions are always linked to the originals. So finding them is trivial. In Lr even my externally exported files are linked via a plugin. Publishing via Lr also means images like those on Flickr, etc are always linked to the originals. Copying and synching metadata then becomes pretty easy. Lr can also tell if an external program has altered the metadata, and you can act accordingly for images it references.

Also, in Lr the metadata can be stored in the database. So you don't have the problem you cite of having to write to thousands of files.

Checksums for data integrity? Convert to DNG.

Rebuild a database from sidecars? No. A regular backup will suffice (I've had a product or two that can do essentially this, rebuilding from sidecars, and it's a nightmare I don't wanna repeat).

I got lost at the "facets" thing. That's just standard IPTC data, and folks have been dealing with that for ages. What isn't explicitly part of a namespace (locations, scene, etc.) can be dealt with flexibly with hierarchical keywords. One need not reinvent the wheel. For example, I reverse-geocode my images in Lr so I've got info like city without having to do much more. And Lr keeps a list of recent keywords, and you can have displayed sets (I have them for competition categories, lighting parameters, publishing or printing parameters, etc.).

Where Lr falls down Photo Mechanic helps, esp with locations, saved location and keyword sets, and controlled vocabulary. You can load up press lists for the IPTC scene category, for example, and apply those in PM during a batch ingest. Then Lr will have them and you proceed from there.

TL;DR: Lr can do much of what you want and more. As much as Aperture, and more if you add on some plugins, and use Photo Mechanic.
 
  • Like
Reactions: kenoh
> Bridge doesn't keep a database. So every time you change folders it has to re-read all the metadata. For this reason it doesn't handle searches well. Try using Bridge with 50,000 images.
>
> I agree that a wider workflow helps, but that means that the robustness against external file operations is a 'must have' for the database.
>
> One model I've looked at:
> * Use Photo Mechanic for tagging. It's superb at that. It writes into the files, and understands controlled vocabularies.
> * Use exiftool to extract metadata for the database. Scripting could automate synchronization between database and sidecar files.
> * Use MySQL as the database, and use command-line tools to update it from changes in the file system.
> * Use Affinity/Luminar/Photoshop/Lightroom as image editors.
> * Use fswatch to monitor the file system.
>
> Don't have a good candidate for the browser/searcher function. The search has to be able to read the database, OR the database must have a set of command-line tools that can access it.
>
> This approach requires a fair amount of scripting, and is still likely to be somewhat fragile.
>
> The first 10 paragraphs *do* give a summary. But, yeah. I hear you.

I do use Bridge with more than 50,000 images. Granted, my needs are not as complex as yours.

Lightroom is the tool you need to look at, as everyone else is saying -- with Photo Mechanic for EXIF tweaks.
 