6 ways to recover from chmod a-x chmod

Someone did:

chmod a-x /bin/chmod

marking the chmod program as not executable. How can we repair this when we can’t execute chmod?

  • copy/cat
  • cp /bin/echo /root/echo
    cat /bin/chmod > /root/echo
    /root/echo a+x /bin/chmod

  • rsync
  • rsync --chmod=a+x /bin/chmod /root/chmod
    /root/chmod a+x /bin/chmod

  • busybox
  • busybox chmod a+x /bin/chmod

  • gcc
  • cat > chmod.c
    #include <sys/stat.h>

    int main(void) {
        /* restore the execute bits on /bin/chmod */
        chmod("/bin/chmod", 0555);
        return 0;
    }
    ^D
    gcc chmod.c
    ./a.out

  • ld.so
  • /lib64/ld-linux-x86-64.so.2 /bin/chmod a+x /bin/chmod

  • setfacl
  • setfacl -m user::rwx /bin/chmod
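
    Whichever method you use, a quick check that the fix worked (a minimal sketch; --version assumes GNU coreutils):

    ls -l /bin/chmod      # the x bits should be back, e.g. -rwxr-xr-x
    /bin/chmod --version  # if this executes, chmod works again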

    Found another solution? Let me know in the comments.

    Description/Explanation of zfs sysctl variables in FreeBSD

    I try to add an explanation for every variable. There is no guarantee of correctness, a lot of guessing is involved, so please comment below to correct me if I’m wrong.

    vfs.zfs.l2c_only_size
    Amount of data cached only in the l2, not in the arc.
    vfs.zfs.mfu_ghost_data_lsize
    The amount of data referenced by the mfu ghost list; since this is a ghost list, the data itself is not part of the arc.
    vfs.zfs.mfu_ghost_metadata_lsize
    Same as above but for metadata.
    vfs.zfs.mfu_ghost_size
    vfs.zfs.mfu_ghost_data_lsize+vfs.zfs.mfu_ghost_metadata_lsize
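
    The relationship can be checked on a live system; a quick sketch (values are in bytes and change constantly, so small deviations are normal):

    sysctl vfs.zfs.mfu_ghost_data_lsize vfs.zfs.mfu_ghost_metadata_lsize vfs.zfs.mfu_ghost_size
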
    vfs.zfs.mfu_data_lsize
    Data used in the cache for mfu data.
    vfs.zfs.mfu_metadata_lsize
    Data used in the cache for mfu metadata.
    vfs.zfs.mfu_size
    This is the size in bytes used by the “most frequently used cache” (data and metadata).
    vfs.zfs.mru_ghost_data_lsize
    The amount of data referenced by the mru ghost list; since this is a ghost list, the data itself is not part of the arc.
    vfs.zfs.mru_ghost_metadata_lsize
    Same as above but for metadata.
    vfs.zfs.mru_ghost_size
    vfs.zfs.mru_ghost_data_lsize+vfs.zfs.mru_ghost_metadata_lsize
    vfs.zfs.mru_data_lsize
    Data used in the cache for mru data.
    vfs.zfs.mru_metadata_lsize
    Data used in the cache for mru metadata.
    vfs.zfs.mru_size
    This is the size in bytes used by the “most recently used cache” (data and metadata).
    vfs.zfs.anon_data_lsize
    See vfs.zfs.anon_size; this is the data part.
    vfs.zfs.anon_metadata_lsize
    See vfs.zfs.anon_size; this is the metadata part.
    vfs.zfs.anon_size
    This is the amount of data in bytes in the cache used anonymously; these are bytes in the write buffer which are not yet synced to disk.
    vfs.zfs.l2arc_norw
    Don’t read data from the l2 cache while writing to it.
    vfs.zfs.l2arc_feed_again
    Controls whether the l2arc feed interval is fixed at vfs.zfs.l2arc_feed_secs (set to 0),
    or is dynamically adjusted between vfs.zfs.l2arc_feed_min_ms and vfs.zfs.l2arc_feed_secs depending on the amount of data written (set to 1).
    vfs.zfs.l2arc_noprefetch
    This controls whether the zfs prefetcher (zfetch) reads data from the l2 arc when prefetching. It does not control whether prefetched data is cached in the l2; it only controls whether the prefetcher uses the l2 arc to read from.
    vfs.zfs.l2arc_feed_min_ms
    Minimum time between l2 feeds (see vfs.zfs.l2arc_feed_again).
    vfs.zfs.l2arc_feed_secs
    Normal/maximum time between l2 feeds (see vfs.zfs.l2arc_feed_again).
    vfs.zfs.l2arc_headroom
    This value multiplied by vfs.zfs.l2arc_write_max gives the scanning range for the l2 feeder. The feeder scans the tails of the 4 arc lists in the order mfu_meta, mru_meta, mfu_data, mru_data.
    On each list the tail is scanned for data which is not yet in the l2 cache. The scan stops once vfs.zfs.l2arc_write_max bytes of new data are found. If vfs.zfs.l2arc_write_max * vfs.zfs.l2arc_headroom bytes were scanned without finding that much new data, the next list is scanned.
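
    Both knobs can be inspected, and on most FreeBSD versions tuned, at runtime; a sketch (whether other values help depends entirely on your l2 devices):

    sysctl vfs.zfs.l2arc_write_max vfs.zfs.l2arc_headroom
    # example: allow 16 MB per feed cycle and scan 4x that far into the tails
    sysctl vfs.zfs.l2arc_write_max=16777216 vfs.zfs.l2arc_headroom=4
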
    vfs.zfs.l2arc_write_boost
    Write limit for the l2 feeder directly after boot (before the first arc eviction happened).
    vfs.zfs.l2arc_write_max
    Write limit for the l2 feeder (see also vfs.zfs.l2arc_feed_again).
    vfs.zfs.arc_meta_limit
    Limits the amount of the arc which can be used by metadata.
    vfs.zfs.arc_meta_used
    Size of data in the arc used by metadata (mru and mfu).
    vfs.zfs.arc_min
    Minimum size the arc can shrink to.
    vfs.zfs.arc_max
    Maximum size the arc can grow to.
    vfs.zfs.dedup.prefetch
    I don’t know what this is for. sysctl says “Enable/disable prefetching of dedup-ed blocks which are going to be freed”, which doesn’t tell me much.
    vfs.zfs.mdcomp_disable
    Disable compression of metadata.
    vfs.zfs.write_limit_override
    Maximum amount of not yet written data (anon, dirty) in the cache. This setting overrides the dynamic size which is calculated by the write_limit options below and sets a fixed value instead.
    vfs.zfs.write_limit_inflated
    If vfs.zfs.write_limit_override is 0, this value is the maximum write limit which can be set dynamically. It is calculated by multiplying vfs.zfs.write_limit_max by 24 (if a lot of redundancy is used in a pool, 1 MB of data could result in 24 MB of redundant writes; 24 is the precalculated worst case).
    vfs.zfs.write_limit_max
    This is used to derive vfs.zfs.write_limit_inflated; it is set to RAM / 2^vfs.zfs.write_limit_shift.
    vfs.zfs.write_limit_min
    Minimum write limit.
    vfs.zfs.write_limit_shift
    See vfs.zfs.write_limit_max; a worked example follows.
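
    A worked example with assumed numbers (8 GB of RAM and a write_limit_shift of 3; check your own values):

    # write_limit_max      = 8 GB / 2^3 = 1 GB
    # write_limit_inflated = 1 GB * 24  = 24 GB
    sysctl vfs.zfs.write_limit_shift vfs.zfs.write_limit_max vfs.zfs.write_limit_inflated
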
    vfs.zfs.no_write_throttle
    Disable the write throttle. Applications can then write at DRAM speed until the write limit is reached; after that, writes stall completely until a new empty txg is available.
    vfs.zfs.zfetch.array_rd_sz
    This is the maximum number of bytes the prefetcher will fetch in advance.
    vfs.zfs.zfetch.block_cap
    This is the maximum number of blocks the prefetcher will fetch in advance.
    vfs.zfs.zfetch.min_sec_reap
    Not really sure.
    vfs.zfs.zfetch.max_streams
    The maximum number of streams a zfetch can handle; I’m not sure if there can be multiple zfetches at work.
    vfs.zfs.prefetch_disable
    Disable the prefetch (zfetch) readahead feature.
    vfs.zfs.mg_alloc_failures
    Could this be the maximum number of write errors per vdev before it’s taken offline?
    vfs.zfs.check_hostid
    ?
    vfs.zfs.recover
    Setting this to 1 tries to fix errors that would otherwise be fatal; I don’t really know what kinds of errors we are talking about.
    vfs.zfs.txg.synctime_ms
    Try to keep txg commits shorter than this value by shrinking the amount of data a txg can hold; this works together with the write limit options above.
    vfs.zfs.txg.timeout
    Seconds between txg syncs (writes) to disk.
    vfs.zfs.vdev.cache.bshift
    This is a bit-shift value: read requests smaller than vfs.zfs.vdev.cache.max will read 2^vfs.zfs.vdev.cache.bshift bytes instead (it doesn’t take longer to fetch that amount, and we might benefit later if we have it in the vdev cache).
    vfs.zfs.vdev.cache.size
    Size of the cache per vdev on the vdev level.
    vfs.zfs.vdev.cache.max
    See vfs.zfs.vdev.cache.bshift; a worked example follows.
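
    A worked example, assuming a bshift of 16 (verify your own value):

    # a 2 KB read below vfs.zfs.vdev.cache.max is inflated to 2^16 = 64 KB;
    # the surplus stays in the vdev cache in case nearby data is requested next
    sysctl vfs.zfs.vdev.cache.bshift vfs.zfs.vdev.cache.max vfs.zfs.vdev.cache.size
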
    vfs.zfs.vdev.write_gap_limit
    It has something to do with two writes being merged into one if the gap between them is at most this value (bytes?).
    vfs.zfs.vdev.read_gap_limit
    It has something to do with two reads being merged into one if the gap between them is at most this value (bytes?).
    vfs.zfs.vdev.aggregation_limit
    It has something to do with two reads/writes being merged into one if the resulting read/write stays below this number of bytes?
    vfs.zfs.vdev.ramp_rate
    FreeBSD’s sysctl description says: “Exponential I/O issue ramp-up rate”. You are kidding, right?
    vfs.zfs.vdev.time_shift
    ?
    vfs.zfs.vdev.min_pending
    ?
    vfs.zfs.vdev.max_pending
    Maximum amount of requests in the per device queue.
    vfs.zfs.vdev.bio_flush_disable
    ?
    vfs.zfs.cache_flush_disable
    No idea what cache we are talking about here, but it disables flushing to it :-/
    vfs.zfs.zil_replay_disable
    You can disable the replay of your zil logs; I’m not sure why someone would want this rather than simply not writing a zil at all.
    vfs.zfs.zio.use_uma
    It has something to do with how memory is allocated.
    vfs.zfs.snapshot_list_prefetch
    Prefetch data when listing snapshots (speeds up snapshot listing).
    vfs.zfs.version.zpl
    Maximum zfs version supported
    vfs.zfs.version.spa
    Maximum zpool version supported
    vfs.zfs.version.acl
    ?
    vfs.zfs.debug
    Set zfs debug level.
    vfs.zfs.super_owner
    This user id is allowed to manage the filesystem.
    kstat.zfs.misc.xuio_stats.onloan_read_buf
    kstat.zfs.misc.xuio_stats.onloan_write_buf
    kstat.zfs.misc.xuio_stats.read_buf_copied
    kstat.zfs.misc.xuio_stats.read_buf_nocopy
    kstat.zfs.misc.xuio_stats.write_buf_copied
    kstat.zfs.misc.xuio_stats.write_buf_nocopy
    kstat.zfs.misc.zfetchstats.hits
    Counts the number of cache hits on items which are in the cache because of the prefetcher.
    kstat.zfs.misc.zfetchstats.misses
    kstat.zfs.misc.zfetchstats.colinear_hits
    Counts the number of cache hits on items which are in the cache because of the prefetcher (prefetched linear reads).
    kstat.zfs.misc.zfetchstats.colinear_misses
    kstat.zfs.misc.zfetchstats.stride_hits
    Counts the number of cache hits on items which are in the cache because of the prefetcher (prefetched stride reads).

    http://en.wikipedia.org/wiki/Stride_of_an_array

    kstat.zfs.misc.zfetchstats.stride_misses
    kstat.zfs.misc.zfetchstats.reclaim_successes
    kstat.zfs.misc.zfetchstats.reclaim_failures
    kstat.zfs.misc.zfetchstats.streams_resets
    kstat.zfs.misc.zfetchstats.streams_noresets
    kstat.zfs.misc.zfetchstats.bogus_streams
    kstat.zfs.misc.arcstats.hits
    Total number of cache hits in the arc.
    kstat.zfs.misc.arcstats.misses
    Total number of cache misses in the arc.
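
    From these two counters a rough overall hit ratio can be derived, e.g. with a small shell sketch:

    hits=$(sysctl -n kstat.zfs.misc.arcstats.hits)
    misses=$(sysctl -n kstat.zfs.misc.arcstats.misses)
    echo "arc hit ratio: $((100 * hits / (hits + misses)))%"
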
    kstat.zfs.misc.arcstats.demand_data_hits
    Number of cache hits for demand data; this is what matters (is good) for your application/share.
    kstat.zfs.misc.arcstats.demand_data_misses
    Number of cache misses for demand data; this is what matters (is bad) for your application/share.
    kstat.zfs.misc.arcstats.demand_metadata_hits
    Number of cache hits for demand metadata; this matters (is good) for getting filesystem data (ls, find, …).
    kstat.zfs.misc.arcstats.demand_metadata_misses
    Number of cache misses for demand metadata; this matters (is bad) for getting filesystem data (ls, find, …).
    kstat.zfs.misc.arcstats.prefetch_data_hits
    The zfs prefetcher tried to prefetch something, but it was already cached (boring).
    kstat.zfs.misc.arcstats.prefetch_data_misses
    The zfs prefetcher fetched something which was not in the cache (good job, it could become a demand hit in the future).
    kstat.zfs.misc.arcstats.prefetch_metadata_hits
    Same as above, but for metadata
    kstat.zfs.misc.arcstats.prefetch_metadata_misses
    Same as above, but for metadata
    kstat.zfs.misc.arcstats.mru_hits
    Cache hit in the “most recently used cache”; we move this item to the mfu cache.
    kstat.zfs.misc.arcstats.mru_ghost_hits
    Cache hit in the “most recently used ghost list”. We had this item in the cache but evicted it; maybe we should increase the mru cache size.
    kstat.zfs.misc.arcstats.mfu_hits
    Cache hit in the “most frequently used cache”; we move this item to the beginning of the mfu cache.
    kstat.zfs.misc.arcstats.mfu_ghost_hits
    Cache hit in the “most frequently used ghost list”. We had this item in the cache but evicted it; maybe we should increase the mfu cache size.
    kstat.zfs.misc.arcstats.allocated
    New data is written to the cache.
    kstat.zfs.misc.arcstats.deleted
    Old data is evicted (deleted) from the cache.
    kstat.zfs.misc.arcstats.stolen
    kstat.zfs.misc.arcstats.recycle_miss
    kstat.zfs.misc.arcstats.mutex_miss
    kstat.zfs.misc.arcstats.evict_skip
    kstat.zfs.misc.arcstats.evict_l2_cached
    We evicted something from the arc, but it’s still cached in the l2 if we need it.
    kstat.zfs.misc.arcstats.evict_l2_eligible
    We evicted something from the arc, and it’s not in the l2. This is sad (maybe we didn’t have enough time to store it there).
    kstat.zfs.misc.arcstats.evict_l2_ineligible
    We evicted something which cannot be stored in the l2.
    Reasons could be:
    We have multiple pools and evicted something from a pool without an l2 device.
    The zfs property secondarycache excludes it.
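
    For reference, the secondarycache property is set per dataset; a quick sketch (tank/data is a placeholder):

    zfs get secondarycache tank/data
    zfs set secondarycache=metadata tank/data   # possible values: all, none, metadata
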
    kstat.zfs.misc.arcstats.hash_elements
    kstat.zfs.misc.arcstats.hash_elements_max
    kstat.zfs.misc.arcstats.hash_collisions
    kstat.zfs.misc.arcstats.hash_chains
    kstat.zfs.misc.arcstats.hash_chain_max
    kstat.zfs.misc.arcstats.p
    kstat.zfs.misc.arcstats.c
    Arc target size; this is the size the system thinks the arc should have.
    kstat.zfs.misc.arcstats.c_min
    kstat.zfs.misc.arcstats.c_max
    kstat.zfs.misc.arcstats.size
    Total size of the arc.
    kstat.zfs.misc.arcstats.hdr_size
    kstat.zfs.misc.arcstats.data_size
    kstat.zfs.misc.arcstats.other_size
    kstat.zfs.misc.arcstats.l2_hits
    Hits in the l2 cache (it was not in the arc, but it was in the l2 cache).
    kstat.zfs.misc.arcstats.l2_misses
    Misses in the l2 cache (it was not in the arc and not in the l2 cache).
    kstat.zfs.misc.arcstats.l2_feeds
    kstat.zfs.misc.arcstats.l2_rw_clash
    kstat.zfs.misc.arcstats.l2_read_bytes
    kstat.zfs.misc.arcstats.l2_write_bytes
    kstat.zfs.misc.arcstats.l2_writes_sent
    kstat.zfs.misc.arcstats.l2_writes_done
    kstat.zfs.misc.arcstats.l2_writes_error
    kstat.zfs.misc.arcstats.l2_writes_hdr_miss
    kstat.zfs.misc.arcstats.l2_evict_lock_retry
    kstat.zfs.misc.arcstats.l2_evict_reading
    kstat.zfs.misc.arcstats.l2_free_on_write
    kstat.zfs.misc.arcstats.l2_abort_lowmem
    kstat.zfs.misc.arcstats.l2_cksum_bad
    kstat.zfs.misc.arcstats.l2_io_error
    kstat.zfs.misc.arcstats.l2_size
    Size of the l2 cache.
    kstat.zfs.misc.arcstats.l2_hdr_size
    Size of the metadata in the arc (ram) used to manage the l2 cache (i.e. to look up whether something is in the l2).
    kstat.zfs.misc.arcstats.memory_throttle_count
    kstat.zfs.misc.arcstats.l2_write_trylock_fail
    kstat.zfs.misc.arcstats.l2_write_passed_headroom
    kstat.zfs.misc.arcstats.l2_write_spa_mismatch
    kstat.zfs.misc.arcstats.l2_write_in_l2
    kstat.zfs.misc.arcstats.l2_write_io_in_progress
    kstat.zfs.misc.arcstats.l2_write_not_cacheable
    kstat.zfs.misc.arcstats.l2_write_full
    kstat.zfs.misc.arcstats.l2_write_buffer_iter
    kstat.zfs.misc.arcstats.l2_write_pios
    kstat.zfs.misc.arcstats.l2_write_buffer_bytes_scanned
    kstat.zfs.misc.arcstats.l2_write_buffer_list_iter
    kstat.zfs.misc.arcstats.l2_write_buffer_list_null_iter
    kstat.zfs.misc.vdev_cache_stats.delegations
    kstat.zfs.misc.vdev_cache_stats.hits
    Hits to the vdev (device level) cache.
    kstat.zfs.misc.vdev_cache_stats.misses
    Misses to the vdev (device level) cache.

    Mediatomb config for Philips 32PFL3807k/02

    I could not find a good config online, so I experimented a bit.

    This config transcodes as little as possible; play, pause, forward, and rewind work for media that is not transcoded.

    Using the latest git version of MediaTomb, pause is also possible in transcoded streams:

    git://mediatomb.git.sourceforge.net/gitroot/mediatomb/mediatomb
    commit 27da70598ba9b1d9c4431f3b03bc5e460480abb2
    Date: Tue Dec 24 21:07:23 2013 +0100
    Add support for play/pause/chapters in transcoded streams
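
    To build exactly that revision, something like the following should work (a sketch; build dependencies vary by system, and if there is no autogen.sh in the tree, autoreconf -i should do):

    git clone git://mediatomb.git.sourceforge.net/gitroot/mediatomb/mediatomb
    cd mediatomb
    git checkout 27da70598ba9b1d9c4431f3b03bc5e460480abb2
    ./autogen.sh && ./configure && make && make install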

    
    <?xml version="1.0" encoding="UTF-8"?>
    <config version="2" xmlns="http://mediatomb.cc/config/2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://mediatomb.cc/config/2 http://mediatomb.cc/config/2.xsd">
      <server>
        <interface>alc0</interface>
        <port>50000</port>
        <ui enabled="yes" show-tooltips="yes">
          <accounts enabled="no" session-timeout="30">
            <account user="mediatomb" password="mediatomb"/>
          </accounts>
        </ui>
        <name>MediaTomb</name>
        <udn>uuid:9bf4f28f-0f37-4ccc-bacb-f6cdf8eadd5a</udn>
        <home>/var/mediatomb</home>
        <webroot>/usr/local/share/mediatomb/web</webroot>
        <storage caching="yes">
          <sqlite3 enabled="yes">
            <database-file>mediatomb.db</database-file>
          </sqlite3>
          <mysql enabled="no">
            <host>localhost</host>
            <username>mediatomb</username>
            <database>mediatomb</database>
          </mysql>
        </storage>
        <protocolInfo extend="yes"/>
        <extended-runtime-options>
          <ffmpegthumbnailer enabled="yes">
          <thumbnail-size>128</thumbnail-size>
            <seek-percentage>5</seek-percentage>
            <filmstrip-overlay>yes</filmstrip-overlay>
            <workaround-bugs>no</workaround-bugs>
            <image-quality>8</image-quality>
          </ffmpegthumbnailer>
          <mark-played-items enabled="no" suppress-cds-updates="yes">
            <string mode="prepend">*</string>
            <mark>
              <content>video</content>
            </mark>
          </mark-played-items>
        </extended-runtime-options>
      </server>
      <import hidden-files="no">
        <scripting script-charset="UTF-8">
          <common-script>/usr/local/share/mediatomb/js/common.js</common-script>
          <playlist-script>/usr/local/share/mediatomb/js/playlists.js</playlist-script>
          <virtual-layout type="builtin">
            <import-script>/usr/local/share/mediatomb/js/import.js</import-script>
          </virtual-layout>
        </scripting>
        <mappings>
          <extension-mimetype ignore-unknown="no">
            <map from="mp3" to="audio/mpeg"/>
            <map from="ogg" to="application/ogg"/>
            <map from="mpg" to="video/mpeg"/>
            <map from="mpeg" to="video/mpeg"/>
            <map from="vob" to="video/mpeg"/>
            <map from="vro" to="video/mpeg"/>
            <map from="m2ts" to="video/avc"/>
            <map from="mts" to="video/avc"/>
            <map from="asf" to="video/x-ms-asf"/>
            <map from="asx" to="video/x-ms-asf"/>
            <map from="wma" to="audio/x-ms-wma"/>
            <map from="wax" to="audio/x-ms-wax"/>
            <map from="wmv" to="video/x-ms-wmv"/>
            <map from="wvx" to="video/x-ms-wvx"/>
            <map from="wm" to="video/x-ms-wm"/>
            <map from="wmx" to="video/x-ms-wmx"/>
            <map from="m3u" to="audio/x-mpegurl"/>
            <map from="pls" to="audio/x-scpls"/>
            <map from="flv" to="video/x-flv"/>
          </extension-mimetype>
          <mimetype-upnpclass>
            <map from="audio/*" to="object.item.audioItem.musicTrack"/>
            <map from="video/*" to="object.item.videoItem"/>
            <map from="image/*" to="object.item.imageItem"/>
          </mimetype-upnpclass>
          <mimetype-contenttype>
            <treat mimetype="audio/mpeg" as="mp3"/>
            <treat mimetype="application/ogg" as="ogg"/>
            <treat mimetype="audio/x-flac" as="flac"/>
            <treat mimetype="image/jpeg" as="jpg"/>
            <treat mimetype="audio/x-mpegurl" as="playlist"/>
            <treat mimetype="audio/x-scpls" as="playlist"/>
            <treat mimetype="audio/x-wav" as="pcm"/>
            <treat mimetype="video/x-msvideo" as="avi"/>
          </mimetype-contenttype>
        </mappings>
      </import>
      <transcoding enabled="yes">
        <mimetype-profile-mappings>
          <transcode mimetype="video/divx" using="multifunctional"/>
          <transcode mimetype="video/x-msvideo" using="multifunctional"/>
        </mimetype-profile-mappings>
        <profiles>
          <profile name="multifunctional" enabled="yes" type="external">
            <mimetype>video/mpeg</mimetype>
            <first-resource>yes</first-resource>
            <hide-original-resource>yes</hide-original-resource>
            <avi-fourcc-list mode="process">
              <fourcc>DX50</fourcc>
            </avi-fourcc-list>
            <agent command="/usr/local/bin/mediatomb-multifunctional.sh" arguments="%in %out"/>
            <buffer size="102400" chunk-size="51200" fill-size="20480"/>
          </profile>
        </profiles>
      </transcoding>
    <custom-http-headers>
          <add header="transferMode.dlna.org: Streaming"/>
          <add header="contentFeatures.dlna.org: DLNA.ORG_OP=01;DLNA.ORG_CI=0;DLNA.ORG_FLAGS=02500000000000000000000000000000"/>
    </custom-http-headers>
    </config>
    

    I use mediatomb-multifunctional.sh from https://vanalboom.org/node/16.

    How the zfs l2 arc works

    I’m writing this because I found it difficult to find a complete description of how things work in the zfs l2 cache. Some information is very easy to find, but seems to lack details. Here is how I believe it works:

    1. Format of l2:
    Every device in the l2 cache is a ring buffer: when new data is written, the oldest data is dropped/overwritten. There are no other priorities as to what is dropped; first written is first dropped.
    The l2 is not an arc. It has only one list, which is fed from the arc; it does not adapt in any way, and caching priorities are fixed (see the search order below).

    2. Populating the l2:
    The l2 is populated by scanning the tail end of the regular (in-memory) arc lists up to a certain depth.
    A new scan is initiated every vfs.zfs.l2arc_feed_secs; it scans until it has found vfs.zfs.l2arc_write_max bytes eligible for l2 (not already in l2, not locked, etc.).
    Each list is scanned from the tail up to vfs.zfs.l2arc_write_max bytes * vfs.zfs.l2arc_headroom.
    The arc lists’ tails are searched in the following order:
    MFU Metadata -> MRU Metadata -> MFU Data -> MRU Data
    So the MRU Data list is only searched if there are less than vfs.zfs.l2arc_write_max bytes in the other lists’ tails.
    Whatever eligible data a scan finds, up to vfs.zfs.l2arc_write_max bytes, is written to l2.
    Because a scan only starts every vfs.zfs.l2arc_feed_secs and writes at most vfs.zfs.l2arc_write_max bytes, this effectively limits the write bandwidth to the l2 devices; a back-of-the-envelope sketch follows below.
    If multiple l2 devices are used, data is written to the devices round-robin (which means that if they are unequal in size, how long data stays cached is more or less random, depending on which device the data was written to).
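
    With the historical defaults of 8 MB per feed and a 1 second interval (check your own sysctls, and note that vfs.zfs.l2arc_write_boost applies right after boot):

    write_max=$((8 * 1024 * 1024))   # vfs.zfs.l2arc_write_max in bytes
    feed_secs=1                      # vfs.zfs.l2arc_feed_secs
    echo "l2 feed rate <= $((write_max / feed_secs / 1024 / 1024)) MB/s"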

    3. Cache hits in l2:
    If data is not in the arc but is in the l2, it is read from the l2 and cached in the arc as if it had been read from the primary disks. Nothing happens to the data in the l2; it could be evicted shortly after the hit (but by then it is in the arc, and it will probably be written to the l2 again before it is evicted from the arc).

    Links:

    https://blogs.oracle.com/brendan/entry/test
    http://mirror-admin.blogspot.de/2011/12/how-l2arc-works.html
    http://src.illumos.org/source/xref/freebsd-head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c

    Foto-Rechner

    There are now 1,000,000 + 1 online photo calculators; I built my own, tailored to my needs.
    It should fit all cameras with a 1.5x crop factor.
    No guarantee that the thing calculates correctly; if you notice an error, let me know.

    The difference between lux and lumen

    Lumen:
    Lumen is the unit for the amount of light emitted by a light source.
    Example: the lamp produces 100 lumens.
    Water analogy:
    Litres per minute coming out of a tap.

    Lux:
    Lux is the unit for the amount of light that arrives at a given point.
    Example: at a distance of 5 m, the centre of the light cone measures 100 lux.
    Water analogy:
    The fill level of a bucket after one minute in the rain.
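
    Lux and lumen are linked by area: 1 lux is 1 lumen per square metre. A worked example (idealised, assuming even illumination): a lamp emitting 100 lm spread evenly over 2 m² gives 100 lm / 2 m² = 50 lx.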

    SSH/Shell access to iomega StorCenter ix2

    I recently got the “iomega StorCenter ix2”; it’s a little NAS for home or small office use.
    It soon became clear to me that it runs Linux, and a Linux device without shell access is hard to bear.
    After googling for a day, I found nothing on this subject that would work with a recent firmware version (2.0.15.43099).
    So here is what I did to get access:

    I opened the case to get direct access to the S-ATA HDs and connected them to my Linux PC.
    After booting up, I could see how it was configured:

    My PC detected the 2 HDs as /dev/sdb and /dev/sdc.
    Each HD contains 2 Linux software-raid partitions.
    The first raid partition (1GB) is always raid1 and contains the firmware.
    The second raid partition is raid1 or linear-raid; this is configurable via the web interface.

    After assembling the first raid with
    mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1
    I could mount /dev/md0:
    mount /dev/md0 /mnt/md0
    (the filesystem is ext2).
    The mounted filesystem contained:

    # ls -lh
    drwxr-xr-x 2 root root 4.0k Mar 14 16:52 images
    drwx------ 2 root root 16.0k Mar 14 15:00 lost+found
    -rwx------ 1 root root 512.0M Mar 14 16:54 swapfile
    # ls -lh images/
    -rw-r--r-- 1 root root 163.0M Jun 25 20:37 apps
    -rw-r--r-- 1 root root 5.0M Mar 14 15:03 config
    -rw-r--r-- 1 root root 416.0k Jun 25 20:37 oem

    The files in images/ looked like they contained what I was searching for. To find out the filetype I used file:

    # file images/*
    images/apps: Linux rev 0.0 ext2 filesystem data
    images/config: Linux rev 0.0 ext2 filesystem data
    images/oem: Linux Compressed ROM File System data, little endian size 425984 version #2 sorted_dirs CRC 0xd3a158e1, edition 0, 222 blocks, 34 files

    That meant that I could simply mount the config and apps files, as they contained ext2 filesystems.

    mount -o loop /mnt/md0/images/config /mnt/config

    This image file contained the /etc directory of the storage device.
    Now I could edit the config files; I changed the following:

    Activate ssh:

    init.d/S50ssh

    There I changed:

    start() {
        echo -n "Starting sshd: "
        #/usr/sbin/sshd
        #touch /var/lock/sshd
        echo "OK"
    }
    stop() {
        echo -n "Stopping sshd: "
        #killall sshd
        #rm -f /var/lock/sshd
        echo "OK"
    }

    To:

    start() {
        echo -n "Starting sshd: "
        /usr/sbin/sshd
        touch /var/lock/sshd
        echo "OK"
    }
    stop() {
        echo -n "Stopping sshd: "
        killall sshd
        rm -f /var/lock/sshd
        echo "OK"
    }


    sshd_config

    Changed:

    Subsystem sftp /usr/sbin/sftp-server

    To:

    #Subsystem sftp /usr/sbin/sftp-server

    To set a password I simply copied the hash from an account on my PC into the shadow file.

    shadow
    root:<hash from my PC’s account>:10933:0:99999:7:::
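
    If you don’t want to copy an existing hash, you can also generate one; a sketch (openssl’s -1 switch produces an MD5 crypt hash, which old embedded userlands usually understand, though your firmware may differ):

    openssl passwd -1 'yournewpassword'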

    After unmounting all disks, shutting down my PC, reconnecting the drives to the StorCenter and switching it on, I had access:

    Starting Nmap 4.76 ( http://nmap.org ) at 2009-06-27 11:15 CEST
    Interesting ports on storage (192.168.2.11):
    PORT STATE SERVICE
    22/tcp open ssh
    MAC Address: 00:D0:B8:03:0B:33 (Iomega)

    Nmap done: 1 IP address (1 host up) scanned in 0.34 seconds

    ssh root@storage
    root@storage's password:

    BusyBox v1.8.2 (2009-01-09 09:01:03 EST) built-in shell (ash)
    Enter 'help' for a list of built-in commands.

    #

    Some impressions from the command line:

    # mount
    rootfs on / type rootfs (rw)
    /dev/root.old on /initrd type ext2 (rw)
    none on / type tmpfs (rw)
    /dev/md0 on /boot type ext2 (rw)
    /dev/loop0 on /mnt/apps type ext2 (ro)
    /dev/loop1 on /etc type ext2 (rw)
    /dev/loop2 on /oem type cramfs (ro)
    proc on /proc type proc (rw)
    none on /proc/bus/usb type usbfs (rw)
    none on /sys type sysfs (rw)
    devpts on /dev/pts type devpts (rw)
    /dev/md1 on /mnt/soho_storage type ext3 (rw,noatime,data=ordered)
    /dev/sdc1 on /mnt/soho_storage/samba/shares/conny type vfat (rw,fmask=0000,dmask=0000,codepage=cp437,iocharset=utf8)
    /dev/sdd1 on /mnt/soho_storage/samba/shares/micha type ext3 (rw,data=ordered)

    # df
    Filesystem Size Used Available Use% Mounted on
    /dev/root.old 3.7M 1.1M 2.5M 30% /initrd
    none 61.8M 2.9M 58.9M 5% /
    /dev/md0 980.4M 845.5M 85.1M 91% /boot
    /dev/loop0 162.3M 135.7M 18.5M 88% /mnt/apps
    /dev/loop1 4.8M 754.0k 3.9M 16% /etc
    /dev/loop2 888.0k 888.0k 0 100% /oem
    /dev/md1 922.2G 118.8G 794.1G 13% /mnt/soho_storage
    /dev/sdc1 232.8G 201.3G 31.5G 86% /mnt/soho_storage/samba/shares/conny
    /dev/sdd1 275.1G 549.0M 260.6G 0% /mnt/soho_storage/samba/shares/micha

    # cat /proc/mdstat
    Personalities : [raid1] [raid10] [linear]
    md1 : active linear sda2[0] sdb2[1]
    974727680 blocks 0k rounding

    md0 : active raid1 sda1[0] sdb1[1]
    1020032 blocks [2/2] [UU]

    unused devices:

    # cat /proc/cpuinfo
    Processor : ARM926EJ-S rev 0 (v5l)
    BogoMIPS : 266.24
    Features : swp half thumb fastmult edsp
    CPU implementer : 0x41
    CPU architecture: 5TEJ
    CPU variant : 0x0
    CPU part : 0x926
    CPU revision : 0
    Cache type : write-back
    Cache clean : cp15 c7 ops
    Cache lockdown : format C
    Cache format : Harvard
    I size : 32768
    I assoc : 1
    I line length : 32
    I sets : 1024
    D size : 32768
    D assoc : 1
    D line length : 32
    D sets : 1024

    Hardware : Feroceon
    Revision : 0000
    Serial : 0000000000000000

    # iostat
    sda sdb md0 md1 sdc sdd cpu
    kps tps svc_t kps tps svc_t kps tps svc_t kps tps svc_t kps tps svc_t kps tps svc_t us sy wt id
    23 1 4.4 676 15 4.1 24 2 0.0 668 122 0.0 4 1 3.5 2 0 9.9 25 12 13 50

    # sdparm -C stop /dev/sdc
    /dev/sdc: ST325082 0A 3.AA

    # rsync -aPh mk@schreibtisch:/home/mk/Desktop/foodir /mnt/soho_storage/samba/shares/micha/Desktop
    receiving file list ...
    4 files to consider
    foodir/
    foodir/foofile1
    0 100% 0.00kB/s 0:00:00 (xfer#1, to-check=2/4)
    foodir/foofile2
    0 100% 0.00kB/s 0:00:00 (xfer#2, to-check=1/4)
    foodir/foofile3
    0 100% 0.00kB/s 0:00:00 (xfer#3, to-check=0/4)

    sent 92 bytes received 247 bytes 678.00 bytes/sec
    total size is 0 speedup is 0.00

    # lv
    lvchange lvdisplay lvm lvmdiskscan lvmsar lvremove lvresize lvscan
    lvcreate lvextend lvmchange lvmsadc lvreduce lvrename lvs
    # pv
    pvchange pvcreate pvdisplay pvmove pvremove pvresize pvs pvscan
    # vg
    vgcfgbackup vgchange vgconvert vgdisplay vgextend vgmerge vgreduce vgrename vgscan
    vgcfgrestore vgck vgcreate vgexport vgimport vgmknodes vgremove vgs vgsplit

    # top
    Mem: 124424K used, 2248K free, 0K shrd, 8588K buff, 89860K cached
    CPU: 53% usr 30% sys 0% nice 7% idle 0% io 0% irq 7% softirq
    Load average: 1.34 0.96 1.79
    PID PPID USER STAT VSZ %MEM %CPU COMMAND
    18683 18682 root S 4916 4% 65% ssh krausam.de rsync --server --sender -vlogDtpr . /mnt/programme
    55 2 root SW 0 0% 10% [pdflush]
    1338 31651 root R 2820 2% 7% top
    26256 740 root S < 352m 284% 5% /usr/sbin/appweb -r /usr/local/appweb -f appweb.conf
    18709 18682 root S 6300 5% 5% rsync -aPh krausam.de:/mnt/programme ./
    839 740 root S 68312 54% 0% /usr/sbin/upnpd -webdir /etc/upnpd/web
    740 1 root S 11100 9% 0% /sbin/executord -c /etc/sohoConfig.xml
    1790 740 root S 8276 7% 0% /usr/local/samba/sbin/smbd -F
    1833 1790 root S 8276 7% 0% /usr/local/samba/sbin/smbd -F
    17127 672 root S 7240 6% 0% sshd: root@pts/1
    31634 672 root S 7080 6% 0% sshd: root@pts/2